feat: migrate from ai/rsc to ai/ui (useChat) #579
Conversation
Replace the experimental ai/rsc streaming-UI pattern with the stable ai/ui pattern using the useChat hook and a route handler. This removes all server-side React component streaming in favor of client-side rendering.

- Create /api/chat/stream route handler with streamText
- Create ChatProvider wrapping useChat for shared chat state
- Refactor all tools to return JSON data (no uiStream)
- Refactor all agents to return data (no UI rendering)
- Migrate all client components from RSC hooks to useChat
- Create ToolResultRenderer for client-side tool UI mapping
- Remove createAI, createStreamableUI, StreamableValue usage
- Zero remaining ai/rsc imports

Closes #59

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Walkthrough

Replaces RSC-based streaming AI actions with a client-side ChatProvider and a new authenticated /api/chat/stream route handler.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client as Client (Browser)
    participant Provider as ChatProvider (client)
    participant API as /api/chat/stream (server)
    participant Task as taskManager / researcher / writer
    participant DB as saveChat (DB)
    rect rgba(200,230,255,0.5)
        Client->>Provider: User submits message (+ optional fileData)
        Provider->>API: POST { chatId, messages, body }
    end
    rect rgba(200,255,200,0.5)
        API->>Task: route by action (resolution_search / inquire / proceed)
        Task-->>API: streaming annotations & final answer (tool_result, 0:, related)
        API->>Client: streaming frames (annotations, answer, terminator)
    end
    rect rgba(255,230,200,0.5)
        API->>DB: saveChat(conversation) (async)
        DB-->>API: ack
    end
```
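The Provider-to-API POST in the diagram can be sketched as a plain payload builder. The field names (`chatId`, `messages`, `fileData`, `action`) come from the diagram and the review comments below; the exact wire shape is an assumption, not the project's verified API.

```typescript
// Hypothetical sketch of the body ChatProvider POSTs to /api/chat/stream.
// `fileData` is the optional base64 data URL shown in the diagram; `action`
// routes the server to resolution search (assumed field names).
function buildChatRequest(
  chatId: string,
  messages: { role: string; content: string }[],
  fileData?: string
): Record<string, unknown> {
  const body: Record<string, unknown> = { chatId, messages }
  if (fileData) {
    body.fileData = fileData
    body.action = 'resolution_search'
  }
  return body
}
```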
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs
🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
Review Summary by Qodo

Migrate from ai/rsc to ai/ui pattern with useChat hook and route handler
Walkthroughs

Description
• Migrate from experimental ai/rsc to stable ai/ui pattern using useChat hook
• Create /api/chat/stream route handler with streamText for server-side streaming
• Implement ChatProvider context wrapping useChat for shared chat state management
• Refactor all tools and agents to return JSON data instead of UI components
• Create ToolResultRenderer for client-side tool result to UI component mapping
• Remove all createAI, createStreamableUI, StreamableValue dependencies

Diagram

```mermaid
flowchart LR
    A["ai/rsc Pattern<br/>Server Actions"] -->|Migrate| B["ai/ui Pattern<br/>useChat Hook"]
    C["createStreamableUI<br/>createStreamableValue"] -->|Replace| D["ChatProvider<br/>Context API"]
    E["Server Actions<br/>submit/clearChat"] -->|Replace| F["/api/chat/stream<br/>Route Handler"]
    G["Tools return UI<br/>Components"] -->|Refactor| H["Tools return JSON<br/>Data only"]
    I["useUIState<br/>useAIState"] -->|Replace| J["useChatContext<br/>Hook"]
    F -->|Streams| D
    D -->|Provides| J
    H -->|Rendered by| K["ToolResultRenderer<br/>Component"]
```
File Changes

1. app/api/chat/stream/route.ts
Code Review by Qodo
```tsx
// Render inquiry annotation if present
const inquiry = annotations.find((a: Annotation) => a.type === 'inquiry')
if (inquiry) {
  const { Copilot } = require('./copilot')
  renderedMessages.push({
    id: 'inquiry',
    component: <Copilot inquiry={{ value: inquiry.data }} />
  })
}

// Render related queries annotation
const related = annotations.findLast?.((a: Annotation) => a.type === 'related')
if (related && related.relatedQueries?.items?.length > 0) {
  const SearchRelated = require('./search-related').default
  const { Section: SectionComp } = require('./section')
  renderedMessages.push({
    id: 'related',
    component: (
      <SectionComp title="Related" separator={true}>
        <SearchRelated relatedQueries={related.relatedQueries} />
      </SectionComp>
    )
  })
}
```
1. Resolution results not rendered 🐞 Bug ≡ Correctness
/api/chat/stream emits a resolution_search_result data annotation, but the client never renders that annotation type, so resolution-search UI (carousel/GeoJSON/map preview) will not appear.
Agent Prompt
## Issue description
Resolution search results are sent as `type: 'resolution_search_result'` annotations, but the UI never renders them, so users only see the summary text and lose the map/imagery output.
## Issue Context
The route handler emits a `resolution_search_result` annotation containing `{ image, mapboxImage, googleImage, geoJson?, ... }`. The client currently only handles `tool_result`, `inquiry`, and `related` annotations.
## Fix Focus Areas
- components/chat-messages.tsx[77-100]
- app/api/chat/stream/route.ts[235-260]
## What to implement
- Add a renderer branch for `annotation.type === 'resolution_search_result'`.
- Reuse existing UI components (e.g., `ResolutionCarousel`, `GeoJsonLayer`) to display imagery + optional GeoJSON.
- Ensure rendering order matches expectations (e.g., show carousel/overlay before/alongside summary).
ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
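A sketch of the missing branch, alongside the existing inquiry/related handling. The annotation shape and component names are assumed from the review above, and React elements are reduced to plain descriptors for illustration.

```typescript
// Map stream annotations to renderer descriptors, adding the missing
// 'resolution_search_result' branch the review calls out.
type Annotation = { type: string; data?: unknown }
type Rendered = { id: string; kind: string; payload?: unknown }

function selectRenderers(annotations: Annotation[]): Rendered[] {
  const out: Rendered[] = []
  const inquiry = annotations.find(a => a.type === 'inquiry')
  if (inquiry) out.push({ id: 'inquiry', kind: 'Copilot', payload: inquiry.data })
  // New branch: resolution-search imagery / GeoJSON, shown alongside the summary
  const resolution = annotations.find(a => a.type === 'resolution_search_result')
  if (resolution) {
    out.push({ id: 'resolution', kind: 'ResolutionCarousel', payload: resolution.data })
  }
  const related = annotations.filter(a => a.type === 'related').at(-1)
  if (related) out.push({ id: 'related', kind: 'SearchRelated', payload: related.data })
  return out
}
```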
```ts
const content: CoreMessage['content'] = [
  { type: 'text', text: 'Analyze this map view.' },
  { type: 'image', image: fileData, mimeType: 'image/png' }
]
messages.push({ role: 'user', content })
```
4. Base64 images saved to db 🐞 Bug ➹ Performance
Resolution-search pushes fileData (a base64 data URL) into CoreMessage.content and saveChatAsync persists it verbatim; saveChat then JSON-stringifies it into the messages.content text column, risking extreme DB bloat and slow history loads.
Agent Prompt
## Issue description
Resolution-search base64 images are being persisted to the `messages` table, which can massively increase DB size and make chat history loads slow or fail.
## Issue Context
- The route inserts `{ type: 'image', image: fileData }` into `CoreMessage.content`.
- `saveChatAsync` persists that `content` verbatim.
- `saveChat` JSON-stringifies object content into `messages.content` (text).
## Fix Focus Areas
- app/api/chat/stream/route.ts[209-214]
- app/api/chat/stream/route.ts[274-337]
- lib/actions/chat.ts[106-112]
## What to implement
- Before calling `saveChatAsync` (or inside it), sanitize message contents:
- Remove image parts entirely from messages being saved, OR
- Replace the `image` field with a constant placeholder (e.g., `"IMAGE_PROCESSED"`).
- If you still need to persist images, store them outside the `messages` table (blob storage) and only persist a reference URL/id.
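The placeholder approach from the prompt can be sketched as a pure pass over the messages before `saveChatAsync`. The message shape is assumed from `CoreMessage`, not copied from the project.

```typescript
// Replace base64 image payloads with a constant placeholder so the
// messages table never stores data URLs verbatim.
type Part =
  | { type: 'text'; text: string }
  | { type: 'image'; image: string; mimeType?: string }
type ChatMessage = { role: string; content: string | Part[] }

function sanitizeForSave(messages: ChatMessage[]): ChatMessage[] {
  return messages.map(m => {
    if (!Array.isArray(m.content)) return m
    return {
      ...m,
      content: m.content.map(p =>
        p.type === 'image' ? { ...p, image: 'IMAGE_PROCESSED' } : p
      )
    }
  })
}
```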
```ts
const initialMessages: Message[] = dbMessages.map((dbMsg): Message => ({
  id: dbMsg.id,
  role: dbMsg.role as Message['role'],
  content: dbMsg.content,
  createdAt: dbMsg.createdAt ? new Date(dbMsg.createdAt) : undefined,
}));
```
5. History drops tool outputs 🐞 Bug ≡ Correctness
Chat history pages load DB messages directly into useChat, but ChatMessages only renders user/assistant roles and the DB schema doesn’t store tool metadata (e.g., toolName), so persisted tool outputs and other non-text artifacts can’t be reconstructed on reload.
Agent Prompt
## Issue description
Tool outputs and other non-text UI artifacts are not renderable from persisted chat history: the DB doesn’t store tool metadata, and the UI ignores non-user/assistant roles.
## Issue Context
- History page loads `dbMessages` into `Message[]`.
- `ChatMessages` only renders `user` and `assistant`.
- DB schema stores only `role` + `content` (text), so toolName/type/annotation details are lost.
## Fix Focus Areas
- app/search/[id]/page.tsx[39-46]
- components/chat-messages.tsx[38-75]
- lib/db/schema.ts[52-61]
- lib/actions/chat.ts[106-112]
## What to implement
Choose one approach:
1) **Schema approach (recommended)**: add columns to persist `type` and `name` (toolName) (and any structured payload refs), then rehydrate tool messages into UI via `ToolResultRenderer`.
2) **Content-encoding approach**: persist a JSON envelope in `content` that includes `{type,name,payload}` and parse it on load; update `ChatMessages` to render those envelopes.
Additionally:
- Filter/transform loaded messages so the UI can render tool results and other artifacts deterministically.
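Option 2 (content-encoding) can be sketched as a small envelope codec. The `{type, name, payload}` shape is the one the prompt proposes, not an existing schema.

```typescript
// Persist tool messages as a JSON envelope in the existing text `content`
// column, and detect/parse it again on load. Plain-text messages come back
// from decode as null, meaning "not an envelope".
type ToolEnvelope = { type: 'tool_result'; name: string; payload: unknown }

function encodeEnvelope(e: ToolEnvelope): string {
  return JSON.stringify(e)
}

function decodeEnvelope(content: string): ToolEnvelope | null {
  try {
    const parsed = JSON.parse(content)
    return parsed && parsed.type === 'tool_result' && typeof parsed.name === 'string'
      ? (parsed as ToolEnvelope)
      : null
  } catch {
    return null
  }
}
```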
Actionable comments posted: 37
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (9)
lib/agents/tools/geospatial.tsx (2)
37-41: ⚠️ Potential issue | 🟠 Major

Avoid logging secret prefixes.

Logging the first 8 characters of `COMPOSIO_API_KEY`, `MAPBOX_ACCESS_TOKEN`, and `COMPOSIO_USER_ID` still leaks information to every log aggregator that ingests server logs, and key prefixes are often enough to identify the account/tier (some providers encode tenancy in the prefix). Use a boolean presence indicator instead.

🛡️ Proposed fix

```diff
 console.log('[GeospatialTool] Environment check:', {
-  composioApiKey: composioApiKey ? `${composioApiKey.substring(0, 8)}...` : 'MISSING',
-  mapboxAccessToken: mapboxAccessToken ? `${mapboxAccessToken.substring(0, 8)}...` : 'MISSING',
-  composioUserId: composioUserId ? `${composioUserId.substring(0, 8)}...` : 'MISSING',
+  composioApiKey: composioApiKey ? 'present' : 'MISSING',
+  mapboxAccessToken: mapboxAccessToken ? 'present' : 'MISSING',
+  composioUserId: composioUserId ? 'present' : 'MISSING',
 });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/agents/tools/geospatial.tsx` around lines 37 - 41, The console.log in the GeospatialTool environment check is leaking secret prefixes by printing substrings of composioApiKey, mapboxAccessToken, and composioUserId; change this to log only boolean presence checks (e.g., !!composioApiKey, !!mapboxAccessToken, !!composioUserId) or descriptive status strings like "present"/"missing" in the same console.log call so no parts of secrets are emitted (locate the block referencing composioApiKey, mapboxAccessToken, composioUserId and replace the substring masking with presence indicators).
50-57: 🧹 Nitpick | 🔵 Trivial

Prefer static JSON import over `require()`.

`require('../../../mapbox_mcp_config.json')` bypasses module resolution checks and won't be tree-shaken or type-checked. The project's TypeScript config has `"resolveJsonModule": true` enabled, allowing direct static JSON imports. The nested try/catch is also redundant: the inner catch simply rethrows, and the `config` variable is never used after assignment, making it safe to remove entirely.

♻️ Proposed refactor

```diff
+import mapboxMcpConfig from '../../../mapbox_mcp_config.json';
 @@
-  let config;
-  try {
-    let mapboxMcpConfig;
-    try {
-      mapboxMcpConfig = require('../../../mapbox_mcp_config.json');
-      config = { ...mapboxMcpConfig, mapboxAccessToken };
-      console.log('[GeospatialTool] Config loaded successfully');
-    } catch (configError: any) {
-      throw configError;
-    }
-  } catch (configError: any) {
-    console.error('[GeospatialTool] Failed to load mapbox config:', configError.message);
-    config = { mapboxAccessToken, version: '1.0.0', name: 'mapbox-mcp-server' };
-    console.log('[GeospatialTool] Using fallback config');
-  }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/agents/tools/geospatial.tsx` around lines 50 - 57, Replace the dynamic require of the mapbox JSON and the redundant try/catch with a static JSON import using the project's resolveJsonModule support: remove the require('../../../mapbox_mcp_config.json') call and the surrounding try/catch, statically import the JSON (mapbox_mcp_config.json) at the top of the module, drop the unused local variable config and/or mapboxMcpConfig if not used elsewhere, and use the imported object directly where needed (referencing mapbox_mcp_config.json and any code that previously referenced mapboxMcpConfig or config).

lib/agents/tools/retrieve.tsx (1)
9-35: ⚠️ Potential issue | 🟠 Major

Robustness gaps in the error path and missing timeout/status check.

Two issues worth addressing while this execute function is being rewritten:

- Line 34: `${error} "${url}".` coerces the `Error` via `toString()`, which for some runtimes produces `"[object Object]"` or loses the underlying `cause`. Extract the message explicitly.
- Lines 10–17: No `response.ok` check and no `AbortController` timeout on `fetch`. A hung Jina endpoint will stall the tool (and therefore the whole chat turn) indefinitely; a 4xx/5xx with a non-JSON body falls through to the generic catch, masking the real status.

🛡️ Suggested hardening

```diff
 execute: async ({ url }: { url: string }) => {
   let results: SearchResultsType | undefined
+  const controller = new AbortController()
+  const timeout = setTimeout(() => controller.abort(), 15_000)
   try {
     const response = await fetch(`https://r.jina.ai/${url}`, {
       method: 'GET',
       headers: {
         Accept: 'application/json',
         'X-With-Generated-Alt': 'true'
-      }
+      },
+      signal: controller.signal
     })
+    if (!response.ok) {
+      return { error: `Retrieve failed for "${url}" (HTTP ${response.status}).` }
+    }
     const json = await response.json()
     if (!json.data || json.data.length === 0) {
       return {
         error: `An error occurred while retrieving "${url}". This website may not be supported.`
       }
     }
     results = {
       results: [
         {
           title: json.data.title,
           content: json.data.content,
           url: json.data.url
         }
       ],
       query: '',
       images: []
     }
   } catch (error) {
     console.error('Retrieve API error:', error)
-    return { error: `${error} "${url}".` }
+    const message = error instanceof Error ? error.message : String(error)
+    return { error: `Failed to retrieve "${url}": ${message}` }
+  } finally {
+    clearTimeout(timeout)
   }
   return results
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/agents/tools/retrieve.tsx` around lines 9 - 35, The catch path is coercing the Error object into a string and the fetch has no status or timeout handling; update the fetch call (where you call fetch(`https://r.jina.ai/${url}`) and call response.json()) to use an AbortController with a short timeout (e.g., 5s), check response.ok and return a clear error when status is not ok (include response.status and statusText), safely parse JSON with a try/catch and handle non-JSON bodies, and in the catch block return the actual error.message (and error.cause if present) rather than using `${error}` so the returned error includes the real message and URL.

components/mobile-icons-bar.tsx (1)
42-44: ⚠️ Potential issue | 🟠 Major

`setMessages([])` alone is insufficient: the user remains on `/search/[oldId]` with the old `chatId`, causing the next message to append to the previous conversation.

The Chat component only auto-navigates to `/search/[id]` when messages.length reaches 1 and the user is not already on a search route (lines 65–67 in chat.tsx: `if (!path.includes('search') && messages.length === 1)`). When a user clicks "New Chat" on an existing chat page, this condition cannot trigger, so no navigation occurs. The next message will be sent with the stale `chatId` and persisted to the old chat.

Add `router.push('/')` to navigate back to home (which generates a fresh `id`), or generate a new ID and navigate to `/search/[newId]`. Also, remove the `async` keyword from `handleNewChat` since there are no awaits.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@components/mobile-icons-bar.tsx` around lines 42 - 44, handleNewChat currently only calls setMessages([]) which leaves the user on the old /search/[oldId] so subsequent messages append to the previous chat; update handleNewChat (remove the unnecessary async) to also navigate away so a fresh chatId is created — either call router.push('/') after clearing messages or generate a new id and call router.push(`/search/${newId}`) (keep setMessages([]) as-is), ensuring you reference the handleNewChat function and the Chat component's routing behavior when making the change.

components/search-section.tsx (1)
14-39: ⚠️ Potential issue | 🟠 Major

`JSON.parse` without error handling, and tool-error shape isn't handled.

Two concerns on the new synchronous parse:

- `JSON.parse(result)` will throw on any malformed input; a single bad tool output would crash the surrounding chat message render (and potentially the whole message list). Wrap in a try/catch.
- Per the sibling changes in `lib/agents/tools/search.tsx`, a failed search now returns `{ error: "An error occurred while searching for \"<query>\"." }`. That object is a valid JSON parse, but it has no `query`/`images`/`results`, so `${searchResults.query}` renders `undefined` and `<SearchResults results={searchResults.results} />` receives `undefined`. An explicit error branch would render a proper error state instead of a broken skeleton / crash.

🛡️ Proposed fix

```diff
-export function SearchSection({ result }: SearchSectionProps) {
-  const searchResults: TypeSearchResults | undefined = result ? JSON.parse(result) : undefined
-  return (
-    <div>
-      {searchResults ? (
+export function SearchSection({ result }: SearchSectionProps) {
+  let parsed: (TypeSearchResults & { error?: string }) | undefined
+  if (result) {
+    try {
+      parsed = JSON.parse(result)
+    } catch (e) {
+      console.error('SearchSection: failed to parse result JSON', e)
+    }
+  }
+  const searchResults = parsed && !('error' in parsed) ? parsed : undefined
+  return (
+    <div>
+      {parsed && 'error' in parsed ? (
+        <Section className="pt-2 pb-0">
+          <p className="text-sm text-muted-foreground">{parsed.error}</p>
+        </Section>
+      ) : searchResults ? (
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@components/search-section.tsx` around lines 14 - 39, Wrap the JSON.parse in SearchSection with a try/catch and fall back to a safe undefined/parseError state instead of letting a thrown parse error bubble out; after parsing, explicitly check for an error shape (e.g., searchResults?.error) and render an error branch (for example show a ToolBadge or a Section with the error message) rather than attempting to use searchResults.query, searchResults.images, or searchResults.results when they may be undefined; update the JSX branches in SearchSection to use the parseError / error-object checks so SearchResultsImageSection and SearchResults only receive valid arrays/strings.

lib/agents/tools/index.tsx (1)
12-24: 🧹 Nitpick | 🔵 Trivial

LGTM — clean removal of RSC streaming plumbing.

Factory signatures align with the updated `searchTool`/`retrieveTool`/`videoSearchTool`/`geospatialTool` definitions. Minor nit: `tools: any` loses the strong typing that `ToolSet` from the `ai` package would give here — consider `Record<string, ReturnType<typeof searchTool> | ...>` or the SDK's tool type helper if you want `streamText({ tools })` to catch signature regressions at compile time.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/agents/tools/index.tsx` around lines 12 - 24, The tools object in getTools is typed as any which loses compile-time guarantees; replace tools: any with a stronger type such as the SDK's ToolSet (from 'ai') or a union Record type using ReturnType for each factory (e.g., Record<string, ReturnType<typeof searchTool> | ReturnType<typeof retrieveTool> | ReturnType<typeof videoSearchTool> | ReturnType<typeof geospatialTool>>) so streamText({ tools }) and other callers get proper type checking; update the signature of getTools (and ToolProps if needed) to reflect the chosen type.

components/chat-panel.tsx (1)
86-105: ⚠️ Potential issue | 🔴 Critical

Attached files are silently dropped on submit.

The UI accepts and displays attachments (lines 64–73, 265–276) but `handleSubmit` only sends `{ role: 'user', content: userContent }` — `selectedFile` is cleared (`clearAttachment()`) without being uploaded or included in the request `body`. The submit button is even enabled when only a file is attached (`input.length === 0 && !selectedFile` on line 257), so a user can attach a file, click send, and get an empty message with no indication that the file was discarded.

The API route handler at `app/api/chat/stream` already expects and supports `fileData` in the request body (destructured at line 31) and processes it via `handleResolutionSearch` when `action === 'resolution_search'`. Encode `selectedFile` to base64 and include it as `fileData` in the request body, along with an `action: 'resolution_search'` parameter.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@components/chat-panel.tsx` around lines 86 - 105, handleSubmit currently clears selectedFile via clearAttachment() and only appends the user text; modify handleSubmit to include the attached file by base64-encoding selectedFile and passing it as fileData in the append call body alongside mapProvider and drawnFeatures, and include action: 'resolution_search' so the backend route (app/api/chat/stream) can process the file; ensure you still clearAttachment() and setInput('') after the append completes, and keep the submit enabled when only a file is attached by using the existing selectedFile check.

app/actions.tsx (1)
1-9: 🧹 Nitpick | 🔵 Trivial

Remove or relocate `app/actions.tsx` — file is unused and misleadingly named.

The file contains only a type export (`AIState`) and is never imported anywhere in the codebase. The `.tsx` extension is misleading since there is no JSX or Server Actions present. Either delete the file or move the `AIState` type to `@/lib/types` (where `AIMessage` already resides). The "backward compatibility" comment does not reflect current usage — nothing imports from this file.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/actions.tsx` around lines 1 - 9, The file exports only the AIState type (referencing AIMessage) and is unused and misnamed as a .tsx; either delete app/actions.tsx or move the AIState type into the existing types module where AIMessage lives (e.g., add export type AIState = { messages: AIMessage[]; chatId: string; isSharePage?: boolean } to the types file and remove the redundant file), and update any imports if you relocate the type.

lib/agents/researcher.tsx (1)
134-143: ⚠️ Potential issue | 🟠 Major

`messages` is mutated in place — caller's retry loop will compound context.

`researcher` pushes the assistant turn (and any tool responses) onto the caller's `messages` array. In `app/api/chat/stream/route.ts` this function is called inside a `while` loop that re-invokes it until `answer.length > 0`, so each retry iteration re-sends a growing conversation containing the previous empty-text assistant turn plus its tool calls. This can cause (a) token bloat, (b) the model repeating the same tool call, and (c) the loop failing to converge.

Recommend returning the new segments from `researcher` and letting the caller decide whether to append (only once, after a successful iteration), or resetting `messages` length before each retry.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/agents/researcher.tsx` around lines 134 - 143, The researcher function currently mutates the caller's messages array by pushing the assistant turn and tool responses (see messages, toolCalls, toolResponses, fullResponse), causing retries to compound context; change researcher to avoid in-place mutation by building and returning the new message segments (e.g., newSegments containing the assistant message and any tool messages) instead of pushing into messages, update the return to include those segments (alongside result, fullResponse, hasError, toolResponses), and let the caller decide whether to append newSegments to messages (only after a successful, non-empty fullResponse) or to reset messages before retries.
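The non-mutating shape this comment recommends can be sketched as follows. The model call is elided, and `Turn` plus the return shape are assumptions for illustration, not the project's actual types.

```typescript
type Turn = { role: 'user' | 'assistant' | 'tool'; content: string }

// Instead of pushing onto the caller's array, return the turns produced by
// this iteration and let the caller append them once, after success.
function researcherStep(history: readonly Turn[]): { answer: string; newSegments: Turn[] } {
  // ...model + tool calls elided; suppose they produced this turn:
  const assistant: Turn = { role: 'assistant', content: 'final answer' }
  return { answer: assistant.content, newSegments: [assistant] }
}

function runWithRetry(history: Turn[], maxAttempts = 3): string {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const { answer, newSegments } = researcherStep(history)
    if (answer.length > 0) {
      history.push(...newSegments) // append exactly once, on success
      return answer
    }
    // failed iteration: history is untouched, so retries don't compound context
  }
  return ''
}
```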
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@app/api/chat/stream/route.ts`:
- Line 41: currentSystemPrompt is passed directly to researcher and never
augmented with RAG context; call the RAG augmentation flow (use the functions
from lib/actions/rag.ts such as generate_embedding and hybrid_search or your
existing wrapper that returns augmentedSystemPrompt) after fetching
getSystemPrompt(userId) and before invoking researcher, then pass
augmentedSystemPrompt to researcher instead of currentSystemPrompt (reference
symbols: currentSystemPrompt, getSystemPrompt, researcher, generate_embedding,
hybrid_search, augmentedSystemPrompt).
- Around line 45-56: The current mapping that builds messages (variable messages
from clientMessages) strips tool-related data by filtering only roles
'user'|'assistant' and mapping to {role, content}, which loses
assistant.toolInvocations and drops any role === 'tool' turns; update the
construction of CoreMessage[] (the messages array creation) to preserve tool
turns and toolInvocations by: not filtering out role === 'tool', and when
mapping each item (from clientMessages) include toolInvocations and any other
auxiliary fields present on assistant messages (e.g., map to include role,
content, and toolInvocations when present), or add a short clarifying comment if
intentionally discarding tool data, so prior tool outputs and assistant tool
calls are replayed correctly.
- Around line 69-92: The hardcoded reply branch uses fragile exact-match logic
and inconsistent wording; update it to normalize the last user message (trim,
collapse whitespace, remove trailing punctuation and lowercase) before matching
instead of comparing raw lastText, and choose wording consistent with the system
prompt (use "Climate forecasting" everywhere) when building the definition
variable; also guard the branch by asserting messages.length > 0 (or
early-return) before accessing messages[messages.length - 1], and keep the call
to saveChatAsync(chatId, userId, messages, definition) after these checks;
alternatively consider moving these canned replies into the system prompt in
lib/agents/researcher.tsx and only use this branch as a fallback.
- Around line 195-272: handleResolutionSearch currently starts a ReadableStream
whose start callback can throw and those errors escape the outer try/catch, and
saveChatAsync is fire-and-forget; fix by making the stream start an async
function that wraps the body in try/catch/finally, calling controller.error(err)
on any caught error and ensuring controller.close() in success, and in finally
await saveChatAsync(chatId, userId, messages, analysisResult?.summary || '') (or
save an error marker) so chat persistence always runs; refer to
handleResolutionSearch, resolutionSearch, querySuggestor, saveChatAsync and use
controller.error(...) inside the start callback to propagate stream failures to
the response consumer.
- Around line 15-19: The 401 response in POST (app/api/chat/stream/route.ts ->
function POST) returns JSON body but omits Content-Type; update the Response
creation for the unauthorized branch to include the header "Content-Type:
application/json" (or use a JSON helper such as NextResponse.json) so the
response explicitly declares JSON content when returning the Unauthorized
payload.
- Around line 152-156: The main path calls querySuggestor with messages which
may still contain image parts; mirror the sanitisation used in
handleResolutionSearch by creating a sanitizedMessages (filtering out any
message.content parts with type 'image' or equivalent image payloads) and pass
that to querySuggestor instead of raw messages; update the relatedQueries
assignment (variable relatedQueries and call to querySuggestor) to use the
filtered messages so image data is never sent to querySuggestor.
- Around line 129-145: The loop in route.ts using researcher(...) can spin
indefinitely and compounds context because researcher mutates the shared
messages array; add a bounded retry mechanism (e.g., maxAttempts constant and an
attempts counter) and break the loop when attempts >= maxAttempts or when a
retry returns empty fullResponse N times (regardless of useSpecificAPI), and
ensure errorOccurred still influences exit where appropriate; also prevent
in-place mutation of messages by either passing a shallow copy of messages into
researcher(...) or refactoring researcher (in lib/agents/researcher.tsx) to
return the additional assistant/tool-turns instead of pushing them into the
provided messages array so retries don’t inflate context. Ensure you update
usages of answer, errorOccurred, and allToolOutputs accordingly so retries stop
after the maxAttempts and do not duplicate tool outputs.
- Line 113: The stream is sending annotation frames using the "8:[...]" prefix
so annotations end up on individual messages instead of in useChat's data field;
update every controller.enqueue(encoder.encode(`8:[${JSON.stringify(...) }]\n`))
call (the ones emitting tool_result, inquiry, related, resolution_search_result
payloads) to use the data-frame prefix "2:[...]" instead (i.e.,
controller.enqueue(encoder.encode(`2:[${JSON.stringify(...)}]\n`))) so these
payloads populate useChat's data array and get returned by getToolResults(),
getInquiry(), etc.
- Around line 89-91: The streaming Response objects in this route (the four
occurrences that call "new Response(stream, { headers: { 'Content-Type':
'text/plain; charset=utf-8' } })") must include the required protocol header;
update each Response return to add "x-vercel-ai-data-stream": "v1" to the
headers alongside Content-Type so the AI SDK (useChat) can recognize and parse
the stream. Locate the four Response constructions in this file (the streaming
returns around the areas noted) and merge the headers into a single headers
object that includes both 'Content-Type' and 'x-vercel-ai-data-stream': 'v1'.
- Around line 297-317: The saveChatAsync flow is generating new nanoid() for
every message (in the messages -> aiMessages mapping) which breaks dbSaveChat's
onConflictDoUpdate deduping; update the mapping in route.ts to preserve message
identity by using msg.id when present (e.g., aiMessages.push uses msg.id) and
only call nanoid() for messages that lack an id, and ensure tool outputs keep
stable IDs similarly; alternatively (if preserving incoming ids isn't possible),
call the db routine to delete prior messages for the chat before inserting the
replayed history — reference the messages array, aiMessages, nanoid(),
saveChatAsync and dbSaveChat to locate the changes.
- Around line 187-192: The three fire-and-forget calls to saveChatAsync
(occurring just before returning streamed Responses) must be wrapped with
Next.js after() to ensure the DB write completes after the response is sent;
replace direct calls like saveChatAsync(chatId, userId, ...) with after(() =>
saveChatAsync(chatId, userId, messages, answer, allToolOutputs, relatedQueries))
for each occurrence (the saveChatAsync invocations in the streaming route
handlers) so the runtime will wait for the save to finish even in serverless
environments.
- Line 80: The stream termination currently enqueues only an "e:" finish-step
and immediately closes the controller (see usages of
controller.enqueue(encoder.encode(...)) and controller.close()), which omits the
required "d:" finish-message; after each existing e: message emit a second
controller.enqueue(encoder.encode(...)) that sends the "d:" chunk with the same
finishReason and usage payload (matching the existing e: content) and then call
controller.close(); update all four occurrences where e: is emitted so useChat
receives the final d: message before stream close.
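A small helper sketch for building both frames with an identical payload (frame prefixes per the AI SDK data stream protocol; the helper itself is hypothetical):

```typescript
// Build the finish-step ("e:") and finish-message ("d:") frames together so
// both can be enqueued before controller.close().
function finishFrames(
  finishReason: string,
  usage: { promptTokens: number; completionTokens: number }
): [string, string] {
  const payload = JSON.stringify({ finishReason, usage })
  return [`e:${payload}\n`, `d:${payload}\n`]
}
```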
In `@app/search/[id]/page.tsx`:
- Around line 41-46: The initialMessages mapping drops annotations (and possible
toolInvocations), and unsafely casts dbMsg.role, causing
ToolResultRenderer/getToolResults and SearchRelated/getRelatedQueries to have no
data and useChat to get invalid roles; update the mapping in initialMessages to
include annotations and toolInvocations from dbMsg (rehydrate the JSONB column
you added to messages) and validate dbMsg.role against the allowed Message roles
used by useChat (e.g., check against a set like
['user','assistant','system','tool'] and fallback to a safe default) so
annotations are preserved and role values are constrained.
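A sketch of the rehydrating mapper (the db row shape and the role whitelist follow the description above and are assumptions about the schema):

```typescript
type DbMessage = {
  id: string
  role: string
  content: string
  annotations?: unknown[]
  toolInvocations?: unknown[]
}

// Whitelist roles useChat understands; fall back to 'assistant' for anything
// unexpected, and rehydrate annotations/toolInvocations from the JSONB column.
const ALLOWED_ROLES = new Set(['user', 'assistant', 'system', 'tool'])

function toInitialMessage(dbMsg: DbMessage) {
  return {
    id: dbMsg.id,
    role: ALLOWED_ROLES.has(dbMsg.role) ? dbMsg.role : 'assistant',
    content: dbMsg.content,
    annotations: dbMsg.annotations ?? [],
    toolInvocations: dbMsg.toolInvocations ?? []
  }
}
```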
In `@components/chat-messages.tsx`:
- Around line 29-36: The code duplicates provider logic by re-filtering
annotations into toolAnnotations; instead call the provider helper
getToolResults(...) (from the context exposed in chat-provider via
getToolResults) to obtain tool results, then iterate those results to push
entries into renderedMessages with id, component (ToolResultRenderer) and
isCollapsed. Remove the local annotations.filter(...) and use getToolResults to
keep logic centralized and consistent.
- Around line 104-114: The isLastMessage prop is computed from index ===
renderedMessages.length - 1 which can point at inquiry/related/tool rows;
instead, compute the index of the final assistant text/turn when building
renderedMessages and pass isLastMessage only for that assistant message row.
Locate where renderedMessages is created and where CollapsibleMessage is
rendered, determine the last assistant message by scanning renderedMessages for
assistant-type entries (e.g., entries with role/type identifying assistant text)
to get lastAssistantIndex, then change the prop to isLastMessage={index ===
lastAssistantIndex} so only the true assistant response controls
auto-expansion/streaming in CollapsibleMessage.
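The scan can be sketched as follows (the row shape is hypothetical; only the backwards search matters):

```typescript
type RenderedRow = { id: string; kind: 'assistant' | 'tool' | 'inquiry' | 'related' }

// Walk renderedMessages from the end to find the final assistant text row, so
// only that row receives isLastMessage.
function lastAssistantIndex(rows: RenderedRow[]): number {
  for (let i = rows.length - 1; i >= 0; i--) {
    if (rows[i].kind === 'assistant') return i
  }
  return -1
}
```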
- Around line 22-36: The keys for ToolResultRenderer are unstable because they
use Math.random(), causing remounts; change the id generation in the
renderedMessages push (the block that builds toolAnnotations and pushes entries
into renderedMessages) to use a stable identifier instead of Math.random()—for
example use an existing annotation unique field (like ann.id), or derive a
deterministic key from ann.toolName plus the annotation index or a hash of
ann.result, and ensure that same stable id is passed as the key to
ToolResultRenderer so React reconciliation preserves component state.
- Around line 77-100: Replace the dynamic require() calls inside the render with
static imports at the top: import Copilot, SearchRelated (or its named export)
and Section (named) instead of using require('./copilot'),
require('./search-related').default and require('./section'); update the
renderedMessages entries to use the imported Copilot, SearchRelated and Section
directly (remove the SectionComp alias), and if you need runtime code-splitting
use next/dynamic to import SearchRelated/Copilot with ssr: false rather than
require() so bundlers and RSC static analysis work correctly.
In `@components/chat-panel.tsx`:
- Around line 30-31: Remove the redundant messages prop from the ChatPanel
component and read messages directly from the context returned by
useChatContext(); specifically, update the ChatPanel signature to drop the
messages parameter and any prop type that includes messages, then replace usages
of the prop with the messages value from const { append, setMessages, messages }
= useChatContext() (or similar) inside the ChatPanel body, and update any
callers to stop passing messages into ChatPanel so there is a single source of
truth.
- Around line 107-110: The handleClear function only calls setMessages([]) and
clearAttachment(), which leaves the current chatId in ChatProvider intact so
subsequent append calls continue writing to the same server-side chat record;
update handleClear to either (A) remount ChatProvider with a fresh chatId by
navigating to a new route (e.g., push('/') or generate a new /search/[newId] and
navigate) so a new server chat is created, or (B) call the server-side
clearChat/delete API for the active chatId before clearing local state; modify
the handleClear implementation (and any callers of setMessages/clearAttachment)
to perform one of these two flows and ensure chatId is reset or the server
record deleted to keep client and server histories in sync.
In `@components/chat-provider.tsx`:
- Around line 59-65: The current annotations const casts useChat().data to
Annotation[] unsafely and ignores that most annotations are emitted under the
route's "8:" message-annotation frames (so annotations will be empty); update
the logic used by annotations and getToolResults to (1) treat useChat().data as
JSONValue[] at runtime and filter with a type guard that verifies each entry is
an Annotation object (has a string "type" and other expected fields) before
casting, and (2) also extract annotations from message frames (entries shaped
like { type: 'message', message: { annotations: [...] } }) so
message.annotations are included; adjust getToolResults to operate on this
validated combined list (refer to useChat().data, the annotations const, and
getToolResults to locate where to change).
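A sketch of the validated combined list (the message-frame shape `{ type: 'message', message: { annotations } }` follows the description above and should be treated as an assumption about the stream format):

```typescript
type Annotation = { type: string } & Record<string, unknown>

// Runtime type guard: an annotation must be an object with a string "type".
function isAnnotation(entry: unknown): entry is Annotation {
  return (
    typeof entry === 'object' &&
    entry !== null &&
    typeof (entry as { type?: unknown }).type === 'string'
  )
}

// Treat useChat().data as unknown[] and combine top-level annotations with
// annotations nested inside message-shaped frames.
function collectAnnotations(data: unknown[] | undefined): Annotation[] {
  const out: Annotation[] = []
  for (const entry of data ?? []) {
    if (!isAnnotation(entry)) continue
    if (entry.type === 'message') {
      const nested = (entry as { message?: { annotations?: unknown[] } }).message?.annotations
      for (const a of nested ?? []) if (isAnnotation(a)) out.push(a)
    } else {
      out.push(entry)
    }
  }
  return out
}
```

`getToolResults` can then filter this combined list instead of casting `useChat().data` directly.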
- Around line 77-97: The ChatContext.Provider's value object is recreated on
every render causing unnecessary re-renders; wrap the value passed to
<ChatContext.Provider> in a useMemo and list its dependencies (messages, input,
setInput, handleSubmit, append, setMessages, isLoading, chatId, annotations,
getToolResults, getInquiry, getRelatedQueries) so the object identity only
changes when those actual values change; update the component to import/use
React.useMemo and replace the inline value={{...}} with a memoized value
variable used in the Provider.
- Around line 13-26: The ChatContextValue.append signature is too narrow and
uses an any cast; update ChatContextValue to match the SDK signature used by
useChat().append by changing append to accept (message: Message | CreateMessage,
options?: ChatRequestOptions) => Promise<string | null | undefined> and remove
any as any casts where append is forwarded; ensure imports/types for
CreateMessage and ChatRequestOptions are pulled from the SDK and update all
usages that relied on the previous restrictive shape (e.g., callers sending
body: { action: 'resolution_search', fileData }) to compile with the new types.
In `@components/chat.tsx`:
- Around line 70-75: The effect calling router.refresh() fires on mount when the
last persisted message is from the assistant; change the condition to only
refresh when a streaming response just finished by tracking the previous
isLoading state (e.g., a useRef prevIsLoading) and require prevIsLoading ===
true && isLoading === false in the useEffect that inspects messages and
lastMsg.role === 'assistant' before calling router.refresh(); update the
useEffect dependency list accordingly and set prevIsLoading = isLoading at the
end of the effect so only transitions from loading→not-loading trigger the
refresh.
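The transition check can be isolated as a pure predicate (a sketch; the ref bookkeeping stays in the effect):

```typescript
// Refresh only on the loading→idle transition, never on mount, and only when
// the final message is from the assistant.
function shouldRefresh(
  prevIsLoading: boolean | undefined,
  isLoading: boolean,
  lastRole: string | undefined
): boolean {
  return prevIsLoading === true && isLoading === false && lastRole === 'assistant'
}
```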
In `@components/collapsible-message.tsx`:
- Around line 28-33: The useEffect currently lists isCollapsed as a dependency
but always resets open to isLastMessage, which overrides user toggles when
isCollapsed changes; update the effect on the component that uses
isCollapsed/open/setOpen/useEffect/isLastMessage so it only responds to changes
in isLastMessage (remove isCollapsed from the dependency array) so open is only
programmatically reset when the message becomes the last message, preserving
user-controlled toggles when isCollapsed flips.
In `@components/copilot.tsx`:
- Around line 69-73: The current code appends the skip sentinel as a visible
user message (append({ role: 'user', content: '{"action": "skip"}' })), which
leaks into UI and persisted chat history; instead send the sentinel as metadata
using the body option (following the drawnFeatures pattern in
followup-panel.tsx) so it isn't rendered or stored as a user-visible
message—update the append call in components/copilot.tsx to append the real user
content (or updatedQuery()) and pass { body: { action: 'skip' } } (or similar)
to convey the skip intent, and ensure any server-side stream/route handler reads
the body metadata rather than relying on message content.
In `@components/followup-panel.tsx`:
- Around line 15-28: The handleSubmit function currently calls append(...)
without error handling and allows duplicate submissions; wrap the append call in
a try/catch/finally inside handleSubmit (use a local submitting state or the
context flag useChatContext().isLoading) to prevent concurrent submissions, set
a submitting boolean true before calling append and false in finally, show or
propagate an error (e.g., set an error state or call a provided notify) from the
catch, and only clear the input via setInput('') on success; also disable the
submit button when submitting or useChatContext().isLoading to guard against
duplicate sends.
In `@components/header-search-button.tsx`:
- Line 160: The button's disabled check uses map from useMap() (Mapbox) and thus
disables the control when mapProvider === 'google' even though
handleResolutionSearch supports Google via mapData.cameraState; update the
disabled expression to be provider-aware (e.g. keep isAnalyzing, then require
map only for Mapbox and require mapData?.cameraState for Google) and apply the
identical change to the mobile button; modify the components using map,
mapProvider, mapData and isAnalyzing so that disabled becomes something like:
disabled={isAnalyzing || (mapProvider === 'mapbox' && !map) || (mapProvider ===
'google' && !mapData?.cameraState)} ensuring you update both the desktop and
mobile button instances that reference disabled.
- Around line 117-128: Update blobToBase64 to handle errors by adding
reader.onerror to reject the Promise on failure and ensure it resolves only with
a non-null string; then avoid duplicate conversions by computing the base64 once
for the chosen blob and reusing it: compute a singleBase64 (using blobToBase64)
from (mapboxBlob || googleBlob) and assign that to fileData, and only compute
mapboxImageData or googleImageData separately if the other provider's blob is
present and different; adjust uses of fileData, mapboxImageData, and
googleImageData accordingly so a single blob isn't read twice.
In `@components/resolution-carousel.tsx`:
- Around line 40-44: The FileReader usage in resolution-carousel.tsx creates a
Promise that only resolves on reader.onloadend which can hang if readAsDataURL
fails; update the Promise in the code that creates fileData (the FileReader
block) to attach reader.onerror and reader.onabort handlers that reject the
Promise with the error/event, and ensure you call reader.onloadend only after
verifying reader.result is usable; also call reader.removeEventListener/clear
handlers or cancel the read on rejection to avoid leaks. By rejecting the
Promise on error/abort the outer await will throw and your existing try/finally
(and setIsAnalyzing(false)) will run as expected.
In `@components/search-related.tsx`:
- Around line 18-20: The related-query handler handleRelatedClick currently
calls append({ role: 'user', content: query }) which omits the same body context
(mapProvider, drawnFeatures) that ChatPanel.handleSubmit includes; update
handleRelatedClick to forward the same default body used by
ChatPanel/ChatProvider (include mapProvider and drawnFeatures along with chatId)
or move the default body construction into ChatProvider and call append with
that shared body so related queries receive the same tool/geospatial context;
reference functions/components: handleRelatedClick, ChatPanel.handleSubmit,
ChatProvider, and append, and ensure the appended payload includes mapProvider
and drawnFeatures fields.
In `@components/tool-result-renderer.tsx`:
- Around line 18-21: The code is doing an unnecessary JSON.stringify/parse
round-trip when rendering search results; update the components to accept the
typed object directly instead of a string: change SearchSection and
VideoSearchSection props from result?: string to result?: TypeSearchResults |
SerperSearchResults (or the appropriate concrete types), update their internal
usages to consume the object type (remove parsing), and then in
components/tool-result-renderer.tsx return <SearchSection result={result} /> and
<VideoSearchSection result={result} /> (also update any other call sites and
imports/types to match the new prop signatures).
- Around line 17-23: ToolResultRenderer must short-circuit error-shaped tool
outputs before dispatching to sections: detect the error shape from the incoming
result (either result?.error for objects or, for the search/video paths where
you stringify, parse the string safely and check for an "error" field) inside
the ToolResultRenderer switch/dispatch logic and return an error UI (or a simple
error message) instead of rendering SearchSection, VideoSearchSection, or
RetrieveSection; reference the toolName switch and the result variable and
handle both object and JSON-stringified error payloads so SearchSection,
VideoSearchSection, and RetrieveSection never receive the { error: string }
shape.
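The short-circuit can be sketched as a single helper handling both payload forms:

```typescript
// Return the error string if the tool output is error-shaped (either a plain
// object or a JSON-stringified one); return null for healthy results.
function extractToolError(result: unknown): string | null {
  let value: unknown = result
  if (typeof value === 'string') {
    try {
      value = JSON.parse(value)
    } catch {
      return null // not JSON, so not an error payload
    }
  }
  if (value !== null && typeof value === 'object' && 'error' in value) {
    const err = (value as { error?: unknown }).error
    if (typeof err === 'string') return err
  }
  return null
}
```

The switch in ToolResultRenderer can call this first and render an error UI when it returns a string.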
In `@components/video-search-section.tsx`:
- Around line 13-14: The component VideoSearchSection currently calls
JSON.parse(result) directly which can throw and crash the render; wrap the parse
in a try/catch inside VideoSearchSection (protecting the result value) so that
if parsing fails you set searchResults to undefined (or an explicit error flag)
and log the error, then render the existing skeleton/error UI path instead of
letting the exception bubble; update references to searchResults (and any
downstream rendering that expects SerperSearchResults) to handle the fallback
case safely.
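A guard-parse sketch (generic so it works for `SerperSearchResults` or any other result type):

```typescript
// Parse defensively: malformed input yields undefined so the component can
// render its existing skeleton/error path instead of crashing.
function safeParseResults<T>(raw: string | undefined): T | undefined {
  if (!raw) return undefined
  try {
    return JSON.parse(raw) as T
  } catch (error) {
    console.error('Failed to parse search results:', error)
    return undefined
  }
}
```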
In `@lib/actions/suggest.ts`:
- Around line 26-41: Replace the streaming usage with the final-value primitive:
stop calling streamObject and use generateObject to return a single
Promise<PartialRelated> (use generateObject with systemPrompt, messages [{ role:
'user', content: query }], and relatedSchema) and remove the for-await loop that
only kept the last partial; also ensure the model passed from getModel() is an
OpenAI model that supports structured outputs (e.g., gpt-4o) before calling
generateObject—validate or coerce the result of getModel() into a compatible
LanguageModel and throw/log a clear error if an incompatible model (like X.AI
grok) is returned.
In `@lib/agents/inquire.tsx`:
- Around line 8-37: The current implementation uses streamObject(...) and
iterates partialObjectStream to set finalInquiry, which only yields the terminal
object and hides partial state; replace streamObject with generateObject
(passing the same params: model (await getModel()) as LanguageModel, system
prompt, messages, schema: inquirySchema) and await its returned object directly,
then assign that result to finalInquiry and return it, removing the for-await
loop over partialObjectStream; ensure types for finalInquiry and the
generateObject call align with PartialInquiry.
In `@lib/agents/tools/geospatial.tsx`:
- Around line 275-283: The switch over queryType inside the IIFE can return
undefined for unknown query types; change the switch in the IIFE (the block that
currently calls prefer('directions_tool') / prefer('matrix_tool') / etc.) to
include a default case that throws a structured Error (or a custom Error
subclass) indicating the queryType is unsupported (include the queryType value
in the message), and then remove the downstream fallback that uses toolName ??
'unknown_tool' so we fail fast and surface a clear "unsupported queryType" error
instead of sending "unknown_tool" to the MCP server.
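A fail-fast sketch (the queryType values and tool names here are illustrative, not the full mapping from geospatial.tsx):

```typescript
// Throw a structured error for unknown query types instead of letting the
// switch fall through and forwarding 'unknown_tool' to the MCP server.
function preferredToolFor(queryType: string): string {
  switch (queryType) {
    case 'directions':
      return 'directions_tool'
    case 'matrix':
      return 'matrix_tool'
    default:
      throw new Error(`Unsupported geospatial queryType: ${queryType}`)
  }
}
```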
In `@lib/agents/writer.tsx`:
- Line 16: The default system prompt string containing the typo "There are also
some proconfigured example queires." should be updated: find the prompt literal
in lib/agents/writer.tsx that includes that exact phrase and replace
"proconfigured" with "preconfigured" and "queires" with "queries" so the prompt
reads "There are also some preconfigured example queries."
---
Outside diff comments:
In `@app/actions.tsx`:
- Around line 1-9: The file exports only the AIState type (referencing
AIMessage) and is unused and misnamed as a .tsx; either delete app/actions.tsx
or move the AIState type into the existing types module where AIMessage lives
(e.g., add export type AIState = { messages: AIMessage[]; chatId: string;
isSharePage?: boolean } to the types file and remove the redundant file), and
update any imports if you relocate the type.
In `@components/chat-panel.tsx`:
- Around line 86-105: handleSubmit currently clears selectedFile via
clearAttachment() and only appends the user text; modify handleSubmit to include
the attached file by base64-encoding selectedFile and passing it as fileData in
the append call body alongside mapProvider and drawnFeatures, and include
action: 'resolution_search' so the backend route (app/api/chat/stream) can
process the file; ensure you still clearAttachment() and setInput('') after the
append completes, and keep the submit enabled when only a file is attached by
using the existing selectedFile check.
In `@components/mobile-icons-bar.tsx`:
- Around line 42-44: handleNewChat currently only calls setMessages([]) which
leaves the user on the old /search/[oldId] so subsequent messages append to the
previous chat; update handleNewChat (remove the unnecessary async) to also
navigate away so a fresh chatId is created — either call router.push('/') after
clearing messages or generate a new id and call router.push(`/search/${newId}`)
(keep setMessages([]) as-is), ensuring you reference the handleNewChat function
and the Chat component's routing behavior when making the change.
In `@components/search-section.tsx`:
- Around line 14-39: Wrap the JSON.parse in SearchSection with a try/catch and
fall back to a safe undefined/parseError state instead of letting a thrown parse
error bubble out; after parsing, explicitly check for an error shape (e.g.,
searchResults?.error) and render an error branch (for example show a ToolBadge
or a Section with the error message) rather than attempting to use
searchResults.query, searchResults.images, or searchResults.results when they
may be undefined; update the JSX branches in SearchSection to use the parseError
/ error-object checks so SearchResultsImageSection and SearchResults only
receive valid arrays/strings.
In `@lib/agents/researcher.tsx`:
- Around line 134-143: The researcher function currently mutates the caller's
messages array by pushing the assistant turn and tool responses (see messages,
toolCalls, toolResponses, fullResponse), causing retries to compound context;
change researcher to avoid in-place mutation by building and returning the new
message segments (e.g., newSegments containing the assistant message and any
tool messages) instead of pushing into messages, update the return to include
those segments (alongside result, fullResponse, hasError, toolResponses), and
let the caller decide whether to append newSegments to messages (only after a
successful, non-empty fullResponse) or to reset messages before retries.
In `@lib/agents/tools/geospatial.tsx`:
- Around line 37-41: The console.log in the GeospatialTool environment check is
leaking secret prefixes by printing substrings of composioApiKey,
mapboxAccessToken, and composioUserId; change this to log only boolean presence
checks (e.g., !!composioApiKey, !!mapboxAccessToken, !!composioUserId) or
descriptive status strings like "present"/"missing" in the same console.log call
so no parts of secrets are emitted (locate the block referencing composioApiKey,
mapboxAccessToken, composioUserId and replace the substring masking with
presence indicators).
- Around line 50-57: Replace the dynamic require of the mapbox JSON and the
redundant try/catch with a static JSON import using the project's
resolveJsonModule support: remove the require('../../../mapbox_mcp_config.json')
call and the surrounding try/catch, statically import the JSON
(mapbox_mcp_config.json) at the top of the module, drop the unused local
variable config and/or mapboxMcpConfig if not used elsewhere, and use the
imported object directly where needed (referencing mapbox_mcp_config.json and
any code that previously referenced mapboxMcpConfig or config).
In `@lib/agents/tools/index.tsx`:
- Around line 12-24: The tools object in getTools is typed as any which loses
compile-time guarantees; replace tools: any with a stronger type such as the
SDK's ToolSet (from 'ai') or a union Record type using ReturnType for each
factory (e.g., Record<string, ReturnType<typeof searchTool> | ReturnType<typeof
retrieveTool> | ReturnType<typeof videoSearchTool> | ReturnType<typeof
geospatialTool>>) so streamText({ tools }) and other callers get proper type
checking; update the signature of getTools (and ToolProps if needed) to reflect
the chosen type.
In `@lib/agents/tools/retrieve.tsx`:
- Around line 9-35: The catch path is coercing the Error object into a string
and the fetch has no status or timeout handling; update the fetch call (where
you call fetch(`https://r.jina.ai/${url}`) and call response.json()) to use an
AbortController with a short timeout (e.g., 5s), check response.ok and return a
clear error when status is not ok (include response.status and statusText),
safely parse JSON with a try/catch and handle non-JSON bodies, and in the catch
block return the actual error.message (and error.cause if present) rather than
using `${error}` so the returned error includes the real message and URL.
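A sketch of the timeout-and-status wrapper (5 s default per the suggestion above; the caller would still JSON-parse the body inside its own try/catch):

```typescript
// Abort the request after timeoutMs and surface non-2xx statuses as real
// Error objects that include status, statusText, and the URL.
async function fetchWithTimeout(url: string, timeoutMs = 5000): Promise<Response> {
  const controller = new AbortController()
  const timer = setTimeout(() => controller.abort(), timeoutMs)
  try {
    const response = await fetch(url, { signal: controller.signal })
    if (!response.ok) {
      throw new Error(`Request failed: ${response.status} ${response.statusText} (${url})`)
    }
    return response
  } finally {
    clearTimeout(timer)
  }
}
```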
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 77b166e7-096f-4885-8ce2-664e41391f3c
📒 Files selected for processing (29)
- app/actions.tsx
- app/api/chat/stream/route.ts
- app/page.tsx
- app/search/[id]/page.tsx
- components/chat-messages.tsx
- components/chat-panel.tsx
- components/chat-provider.tsx
- components/chat.tsx
- components/collapsible-message.tsx
- components/copilot.tsx
- components/followup-panel.tsx
- components/header-search-button.tsx
- components/message.tsx
- components/mobile-icons-bar.tsx
- components/resolution-carousel.tsx
- components/search-related.tsx
- components/search-section.tsx
- components/tool-result-renderer.tsx
- components/video-search-section.tsx
- lib/actions/suggest.ts
- lib/agents/inquire.tsx
- lib/agents/query-suggestor.tsx
- lib/agents/researcher.tsx
- lib/agents/tools/geospatial.tsx
- lib/agents/tools/index.tsx
- lib/agents/tools/retrieve.tsx
- lib/agents/tools/search.tsx
- lib/agents/tools/video-search.tsx
- lib/agents/writer.tsx
💤 Files with no reviewable changes (1)
- lib/agents/query-suggestor.tsx
📜 Review details
🧰 Additional context used
🧠 Learnings (2)
📓 Common learnings
Learnt from: ngoiyaeric
Repo: QueueLab/QCX PR: 0
File: :0-0
Timestamp: 2026-02-08T16:50:33.791Z
Learning: The PR branch implements RAG (Retrieval-Augmented Generation) via `lib/actions/rag.ts` which uses Supabase RPC functions `generate_embedding()` and `hybrid_search()` to retrieve relevant context from past conversations. However, there's a bug on line 433 of `app/actions.tsx` where `currentSystemPrompt` is passed to the researcher instead of `augmentedSystemPrompt`, preventing the retrieved context from being used.
Learnt from: ngoiyaeric
Repo: QueueLab/QCX PR: 0
File: :0-0
Timestamp: 2026-02-10T08:45:03.719Z
Learning: The app/api/chats/route.ts endpoint correctly returns { chats: [] } for unauthenticated users instead of a 401 error. This is the intended behavior for better UX and should not be changed to return 401.
Learnt from: ngoiyaeric
Repo: QueueLab/QCX PR: 0
File: :0-0
Timestamp: 2026-02-08T16:50:33.791Z
Learning: Main branch has database schema with vector embeddings and RPC functions for RAG but never uses them. The PR branch activates this dormant infrastructure by actually calling these functions to implement context-aware responses.
📚 Learning: 2026-02-08T16:50:33.791Z
Learnt from: ngoiyaeric
Repo: QueueLab/QCX PR: 0
File: :0-0
Timestamp: 2026-02-08T16:50:33.791Z
Learning: The PR branch implements RAG (Retrieval-Augmented Generation) via `lib/actions/rag.ts` which uses Supabase RPC functions `generate_embedding()` and `hybrid_search()` to retrieve relevant context from past conversations. However, there's a bug on line 433 of `app/actions.tsx` where `currentSystemPrompt` is passed to the researcher instead of `augmentedSystemPrompt`, preventing the retrieved context from being used.
Applied to files:
lib/agents/researcher.tsx
🔇 Additional comments (10)
components/message.tsx (1)
10-24: LGTM. Migration from `StreamableValue<string>` to plain `string` is clean; LaTeX preprocessing remains correct and the `content || ''` guard safely handles any accidental `undefined` from callers.
app/page.tsx (1)
1-17: LGTM — provider swap is clean and consistent with `app/search/[id]/page.tsx`.
lib/agents/tools/video-search.tsx (1)
3-28: LGTM — factory signature cleanup and error shape are consistent with other tools in this PR. Returning a plain `{ error: string }` object mirrors the pattern in `lib/agents/tools/search.tsx` and `retrieve.tsx`, keeping the tool result rendering surface uniform for `components/tool-result-renderer.tsx`.
lib/agents/tools/search.tsx (1)
5-55: LGTM — consistent with the tool refactor pattern across the PR. The error shape (`{ error: string }`) and removal of `ToolProps` match `video-search.tsx` and `retrieve.tsx`, keeping `components/tool-result-renderer.tsx` happy.
components/resolution-carousel.tsx (1)
46-54: Payload size limits are already configured adequately; base64 overhead is mitigated by client-side compression. The satellite image is base64-encoded and sent via `request.json()` in `/api/chat/stream`, adding ~33% size overhead. However, `compressImage` reduces dimensions to 1024×1024 and converts to JPEG at 0.8 quality before transmission, which should keep payloads well below the 200 MB `bodySizeLimit` configured for server actions in `next.config.mjs`. Verify that hosting platforms and any intermediate proxies align with this limit if deploying outside the default environment, but the current compression strategy is sound.
lib/agents/researcher.tsx (1)
13-75: System prompt updates look coherent; verify backtick escaping renders. The `${'search'}`, `${'retrieve'}`, `${'geospatialQueryTool'}` interpolations inside a `String.raw` template produce literal backticks in the rendered prompt. Confirm this is the intended surface shown to the model (vs. previous version that presumably used markdown fencing). Functionally neutral, just a readability/consistency note for the prompt content.
components/chat-provider.tsx (2)
30-34: `useChatContext` LGTM. Clean throw-on-missing-provider pattern.
42-57: No action needed — the concern about `toolInvocations` in `initialMessages` does not apply to the current implementation. The codebase only persists messages with `role` and `content` to the database (schema and mapping in search/[id]/page.tsx confirm this). `toolInvocations` are never stored or sent to `initialMessages`. Additionally, `useChat` is not configured with tools, and metadata like annotations (`inquiry`, `related`) flows separately through the stream protocol, not through message fields. If tool-calling and replay are added in the future, proper persistence and `sendExtraMessageFields` would then be relevant.
components/chat-messages.tsx (1)
78-85: No action required. The `inquiry` prop shape passed to `<Copilot>` is correct — `CopilotProps` expects `inquiry: { value: PartialInquiry }`, and the component receives exactly that at line 83.
app/api/chat/stream/route.ts (1)
1-11: 🧹 Nitpick | 🔵 Trivial — Unused imports. `streamText` and `LanguageModel` (line 1) and `getModel` (line 8) are imported but never referenced in this module. Safe to drop.
⛔ Skipped due to learnings
Learnt from: ngoiyaeric
Repo: QueueLab/QCX PR: 0
File: :0-0
Timestamp: 2026-01-13T13:26:30.086Z
Learning: Workers that rely on ai.generateObject (geojsonParser, map-command-generator, feedback-analyzer) must not use the X.AI 'grok-4-fast-non-reasoning' model because X.AI's chat/completions rejects JSON Schema structured-output arguments; prefer OpenAI gpt-4o for structured outputs.
Learnt from: ngoiyaeric
Repo: QueueLab/QCX PR: 0
File: :0-0
Timestamp: 2026-02-08T16:50:33.791Z
Learning: The PR branch implements RAG (Retrieval-Augmented Generation) via `lib/actions/rag.ts` which uses Supabase RPC functions `generate_embedding()` and `hybrid_search()` to retrieve relevant context from past conversations. However, there's a bug on line 433 of `app/actions.tsx` where `currentSystemPrompt` is passed to the researcher instead of `augmentedSystemPrompt`, preventing the retrieved context from being used.
```ts
        ? { lat: parseFloat(latitude), lng: parseFloat(longitude) }
        : undefined

const currentSystemPrompt = (await getSystemPrompt(userId)) || ''
```
RAG context is still not being passed to the researcher.
currentSystemPrompt is assigned directly from getSystemPrompt(userId) and passed to researcher on line 135, with no call to the RAG augmentation (generate_embedding / hybrid_search from lib/actions/rag.ts). The retrieved-context infrastructure remains dormant in this endpoint, replicating the exact same bug previously identified in app/actions.tsx (where currentSystemPrompt was passed instead of augmentedSystemPrompt). If RAG was intended to be active post-migration, augment the prompt here before invoking researcher.
Based on learnings: "there's a bug on line 433 of app/actions.tsx where currentSystemPrompt is passed to the researcher instead of augmentedSystemPrompt, preventing the retrieved context from being used" — the same pattern is repeated here.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/api/chat/stream/route.ts` at line 41, currentSystemPrompt is passed
directly to researcher and never augmented with RAG context; call the RAG
augmentation flow (use the functions from lib/actions/rag.ts such as
generate_embedding and hybrid_search or your existing wrapper that returns
augmentedSystemPrompt) after fetching getSystemPrompt(userId) and before
invoking researcher, then pass augmentedSystemPrompt to researcher instead of
currentSystemPrompt (reference symbols: currentSystemPrompt, getSystemPrompt,
researcher, generate_embedding, hybrid_search, augmentedSystemPrompt).
```ts
if (lastText === 'what is a planet computer?' || lastText === 'what is qcx-terra?') {
  const definition = lastText === 'what is a planet computer?'
    ? `A planet computer is a proprietary environment aware system that interoperates weather forecasting, mapping and scheduling using cutting edge multi-agents to streamline automation and exploration on a planet. Available for our Pro and Enterprise customers. [QCX Pricing](https://www.queue.cx/#pricing)`
    : `QCX-Terra is a model garden of pixel level precision geospatial foundational models for efficient land feature predictions from satellite imagery. Available for our Pro and Enterprise customers. [QCX Pricing](https://www.queue.cx/#pricing)`;

  const encoder = new TextEncoder()
  const stream = new ReadableStream({
    start(controller) {
      // Send text content
      controller.enqueue(encoder.encode(`0:${JSON.stringify(definition)}\n`))
      // Send finish message
      controller.enqueue(encoder.encode(`e:{"finishReason":"stop","usage":{"promptTokens":0,"completionTokens":0}}\n`))
      // Send data with metadata
      controller.enqueue(encoder.encode(`2:[{"relatedQueries":{"items":[]},"type":"related"}]\n`))
      controller.close()
    }
  })

  saveChatAsync(chatId, userId, messages, definition)

  return new Response(stream, {
    headers: { 'Content-Type': 'text/plain; charset=utf-8' }
  })
}
```
Hardcoded replies: content drift vs. system prompt, and fragile string matching.

- The Planet Computer definition returned here says "weather forecasting", while the system-prompt source of truth in `lib/agents/researcher.tsx` (line 73) says "Climate forecasting". Pick one.
- Matching on exact lowercased strings (including a trailing `?`) means "What is a planet computer" (no question mark) or an extra trailing space falls through to the task manager and gets a different, model-generated answer — a visible inconsistency. Consider normalizing more aggressively (strip punctuation/whitespace) or move these responses into the system prompt only.
- On empty `messages`, `messages[messages.length - 1]` is `undefined` — safe due to the optional chaining on line 68, but worth asserting at least one message before branching into the task-manager path.
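The normalization suggested above can be sketched as a small helper. This is a hypothetical illustration, not code from the PR; the name `normalizeQuery` is assumed:

```typescript
// Hypothetical helper (not in the PR): lowercase, collapse whitespace,
// and strip trailing punctuation so near-miss phrasings like
// "What is a planet computer" still match the canned-reply branch.
export function normalizeQuery(raw: string): string {
  return raw
    .toLowerCase()
    .replace(/\s+/g, ' ')   // collapse runs of whitespace into single spaces
    .trim()
    .replace(/[?!.]+$/, '') // drop trailing ?, !, or .
}
```

With this, the branch would compare `normalizeQuery(lastText)` against punctuation-free keys such as `'what is a planet computer'`.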
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/api/chat/stream/route.ts` around lines 69 - 92, The hardcoded reply
branch uses fragile exact-match logic and inconsistent wording; update it to
normalize the last user message (trim, collapse whitespace, remove trailing
punctuation and lowercase) before matching instead of comparing raw lastText,
and choose wording consistent with the system prompt (use "Climate forecasting"
everywhere) when building the definition variable; also guard the branch by
asserting messages.length > 0 (or early-return) before accessing
messages[messages.length - 1], and keep the call to saveChatAsync(chatId,
userId, messages, definition) after these checks; alternatively consider moving
these canned replies into the system prompt in lib/agents/researcher.tsx and
only use this branch as a fallback.
let finalSuggestions: PartialRelated = {}

const result = await streamObject({
  model: (await getModel()) as LanguageModel,
  system: systemPrompt,
  messages: [{ role: 'user', content: query }],
  schema: relatedSchema
})

for await (const obj of result.partialObjectStream) {
  if (obj && typeof obj === 'object' && 'items' in obj) {
    finalSuggestions = obj as PartialRelated
  }
-   objectStream.done()
- })()
}

- return objectStream.value
return finalSuggestions
There was a problem hiding this comment.
🧩 Analysis chain
🌐 Web query:
ai SDK 4.3 generateObject vs streamObject usage for final-value-only consumers
💡 Result:
For final-value-only consumers in Vercel AI SDK 4.3 (part of v4 series), use generateObject. It awaits completion and returns the full validated structured object via { object }, ideal when you only need the end result without streaming partial updates. streamObject is for streaming scenarios: it provides partialObjectStream for progressive updates, elementStream (array mode), and a final object promise, but requires handling the stream even if just awaiting the end. generateObject throws AI_NoObjectGeneratedError on failure; streamObject handles errors in-stream via onError. Note: These are deprecated in later versions (v5+) favoring generateText({ output: Output.object }) / streamText, but available in 4.3. No tool calling support with generateObject/streamObject; use text variants.
Citations:
- 1: https://v4.ai-sdk.dev/docs/ai-sdk-core/generating-structured-data
- 2: https://v4.ai-sdk.dev/docs/reference/ai-sdk-core/stream-object
- 3: https://ai-sdk.dev/v4/docs/reference/ai-sdk-core/stream-object
- 4: https://ai-sdk.dev/v4/docs/reference/ai-sdk-core/generate-object
- 5: https://ai-sdk.dev/v4/docs/ai-sdk-core/generating-structured-data
🏁 Script executed:
# Find the getModel() function definition
fd -t f -e ts -e js | xargs rg -l "getModel" | head -20

Repository: QueueLab/QCX
Length of output: 147
🏁 Script executed:
# Search for getModel function definition
rg -n "^\s*(export\s+)?(async\s+)?function\s+getModel|^\s*const\s+getModel\s*=" --type ts --type js

Repository: QueueLab/QCX
Length of output: 143
🏁 Script executed:
# Check suggest.ts and related model configuration
fd -t f "suggest.ts" | xargs cat -n

Repository: QueueLab/QCX
Length of output: 2052
🏁 Script executed:
# Read the getModel function implementation
cat -n lib/utils/index.ts | head -60

Repository: QueueLab/QCX
Length of output: 2644
🏁 Script executed:
# Read more of the getModel function to see all model options
cat -n lib/utils/index.ts | sed -n '24,100p'

Repository: QueueLab/QCX
Length of output: 3331
🏁 Script executed:
# Check git history or PR context for suggest.ts
git log --oneline -n 20 -- lib/actions/suggest.ts 2>/dev/null || echo "Git history unavailable"

Repository: QueueLab/QCX
Length of output: 117
Use generateObject instead of streamObject—but require OpenAI gpt-4o for structured output compatibility.
The signature returns Promise<PartialRelated>, and the loop at lines 35–39 discards every partial except the last. generateObject is the correct primitive for final-value-only consumers and eliminates wasted parsing overhead.
Critical: X.AI's grok-4-fast-non-reasoning model does not support JSON Schema structured outputs and will reject this call. Since getModel() can return X.AI by default or user selection, you must either:
- Preferred: Update the `getModel()` call to enforce OpenAI: `const model = (await getModel()) as LanguageModel; /* ensure gpt-4o or compatible */`, or
- Explicitly pass OpenAI: replace `getModel()` with an OpenAI `gpt-4o` initialization.
Ensure model selection supports structured outputs before deploying.
♻️ Proposed refactor
-import { CoreMessage, LanguageModel, streamObject } from 'ai'
+import { CoreMessage, LanguageModel, generateObject } from 'ai'
@@
- let finalSuggestions: PartialRelated = {}
-
- const result = await streamObject({
- model: (await getModel()) as LanguageModel,
- system: systemPrompt,
- messages: [{ role: 'user', content: query }],
- schema: relatedSchema
- })
-
- for await (const obj of result.partialObjectStream) {
- if (obj && typeof obj === 'object' && 'items' in obj) {
- finalSuggestions = obj as PartialRelated
- }
- }
-
- return finalSuggestions
+ const { object } = await generateObject({
+ model: (await getModel()) as LanguageModel,
+ system: systemPrompt,
+ messages: [{ role: 'user', content: query }],
+ schema: relatedSchema
+ })
+
+  return object

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@lib/actions/suggest.ts` around lines 26 - 41, Replace the streaming usage
with the final-value primitive: stop calling streamObject and use generateObject
to return a single Promise<PartialRelated> (use generateObject with
systemPrompt, messages [{ role: 'user', content: query }], and relatedSchema)
and remove the for-await loop that only kept the last partial; also ensure the
model passed from getModel() is an OpenAI model that supports structured outputs
(e.g., gpt-4o) before calling generateObject—validate or coerce the result of
getModel() into a compatible LanguageModel and throw/log a clear error if an
incompatible model (like X.AI grok) is returned.
@coderabbitai there are no tokens being generated and no multimodal features
|
I've noted this for future reviews. I'll avoid flagging token tracking or multimodal message handling concerns for this codebase since those features are intentionally out of scope. ✏️ Learnings added
@coderabbitai

# Vercel Runtime Log
RequestID: rwcvl-1777104552049-1cd52d48d85c
Lifecycle
MiddlewareStatus: 200
FunctionStatus: 401
DeploymentID: dpl_EDRgZ3E2Ks5RXoET5CPcPaaJDwaw
🧩 Analysis chain: Let me inspect the new stream route and any auth utilities it references to diagnose this 401.
Root Cause

The route handler calls the server-side auth check, and that check is returning the 401. Three likely causes, in order of probability:

1. 🔴 Supabase env vars not set in the preview environment

if (!supabaseUrl || !supabaseAnonKey) {
  console.error('[Auth] Supabase URL or Anon Key is not set for server-side auth.');
  return { user: null, session: null, error: ... };
}

Fix: In Vercel → Project Settings → Environment Variables, ensure the Supabase URL and anon key are available to the Preview environment.

2. 🟡
Stream protocol: add x-vercel-ai-data-stream header, use 2: data frames instead of 8: annotation frames, add d: finish-message after every e: finish-step, add Content-Type on error responses.
Security: stop logging API key prefixes in geospatial.tsx, add default case to queryType switch.
Reliability: bound researcher retry loop (max 3 attempts), return newSegments instead of mutating caller messages, add AbortController timeout and response.ok check in retrieve.tsx, wrap JSON.parse in try/catch in search-section/video-search-section/tool-result-renderer.
Components: memoize ChatProvider context value, type-safe annotation filtering with type guard, fix imports (ChatRequestOptions from ai not ai/react), stable keys in chat-messages, fix isLastMessage tracking, FileReader onerror/onabort handlers in resolution-carousel and header-search-button, provider-aware disabled check, router.push for new chat navigation, skip sentinel via body metadata in copilot, loading guard and error handling in followup-panel, mapProvider in search-related body, remove isCollapsed from collapsible-message deps, router.refresh only on loading transition in chat.tsx.
Data: strip base64 images before DB persistence in saveChatAsync, preserve message IDs for stable deduplication, filter to valid roles in initialMessages rehydration, use generateObject instead of streamObject in suggest.ts and inquire.tsx, send file attachments as base64 in chat-panel, delete unused app/actions.tsx.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
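The frame layout this commit relies on follows the AI SDK data-stream convention of a prefix, a JSON payload, and a newline. A minimal sketch of encoders for the frames named above (illustrative only; these helper names are not from the repo, and the `e:` usage payload mirrors the hardcoded example earlier in the thread):

```typescript
// Sketch of AI SDK data-stream frame encoding: `<prefix>:<json>\n`.
// Prefixes referenced in this commit: 0 = text delta, 2 = data array,
// e = finish-step, d = finish-message.
const textFrame = (text: string): string => `0:${JSON.stringify(text)}\n`

const dataFrame = (data: unknown[]): string => `2:${JSON.stringify(data)}\n`

const finishStepFrame = (finishReason: string): string =>
  `e:${JSON.stringify({ finishReason, usage: { promptTokens: 0, completionTokens: 0 } })}\n`

// Per the commit, every `e:` finish-step is followed by a terminal `d:`
// finish-message so the useChat client knows the response is complete.
const finishMessageFrame = (finishReason: string): string =>
  `d:${JSON.stringify({ finishReason })}\n`
```

A route handler would enqueue these encoded frames (plus the `x-vercel-ai-data-stream` header) on the `ReadableStream` it returns.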
@coderabbitai there are no tokens generating
🔴 Root Cause 1 — Model API key not set in the preview environment
Fix: In Vercel → Project Settings → Environment Variables, make sure all model API keys are scoped to Preview (not just Production).

🟡 Root Cause 2 — Writer fallback is gated behind
Actionable comments posted: 13
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
lib/agents/tools/geospatial.tsx (1)
270-300: ⚠️ Potential issue | 🟡 Minor

Residual `'unknown_tool'` fallback when `prefer()` returns undefined.

The new `default` branch handles unsupported `queryType`, but for `'directions'`, `'map'`, `'reverse'`, and `'geocode'` the IIFE still returns `prefer(...)` with no string fallback (only `'search'` has `|| 'poi_search_tool'`). If the MCP server doesn't expose `directions_tool`/`static_map_image_tool`/`reverse_geocode_tool`/`forward_geocode_tool` under those exact names, `toolName` is `undefined` and line 300 sends `name: 'unknown_tool'` — producing the same confusing "tool not found" error the prior review aimed to eliminate. Prefer failing fast with a structured message and dropping the `?? 'unknown_tool'` fallback.

🛡️ Suggested fix
 const toolName = await (async () => {
   const { tools } = await mcpClient.listTools().catch(() => ({ tools: [] }));
   const names = new Set(tools?.map((t: any) => t.name) || []);
   const prefer = (...cands: string[]) => cands.find(n => names.has(n));
+  let resolved: string | undefined
   switch (queryType) {
-    case 'directions': return prefer('directions_tool')
-    case 'distance': return prefer('matrix_tool');
-    case 'search': return prefer('isochrone_tool', 'category_search_tool') || 'poi_search_tool';
-    case 'map': return prefer('static_map_image_tool')
-    case 'reverse': return prefer('reverse_geocode_tool');
-    case 'geocode': return prefer('forward_geocode_tool');
+    case 'directions': resolved = prefer('directions_tool'); break;
+    case 'distance': resolved = prefer('matrix_tool'); break;
+    case 'search': resolved = prefer('isochrone_tool', 'category_search_tool', 'poi_search_tool'); break;
+    case 'map': resolved = prefer('static_map_image_tool'); break;
+    case 'reverse': resolved = prefer('reverse_geocode_tool'); break;
+    case 'geocode': resolved = prefer('forward_geocode_tool'); break;
     default: throw new Error(`Unsupported queryType: ${queryType}`);
   }
+  if (!resolved) {
+    throw new Error(`No MCP tool available on server for queryType "${queryType}"`)
+  }
+  return resolved
 })();

-  mcpClient.callTool({ name: toolName ?? 'unknown_tool', arguments: toolArgs }),
+  mcpClient.callTool({ name: toolName, arguments: toolArgs }),

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/agents/tools/geospatial.tsx` around lines 270 - 300, The code currently falls back to name: 'unknown_tool' when toolName is undefined, masking missing MCP tool mappings; update the toolName resolution (the IIFE that calls prefer(...)) to detect when prefer(...) returns undefined and immediately throw a clear, structured Error (including queryType and the attempted tool identifiers) instead of returning undefined, and remove the "?? 'unknown_tool'" fallback from the mcpClient.callTool call so mcpClient.callTool({ name: toolName, arguments: toolArgs }) is only invoked with a validated toolName; reference prefer(), toolName, and mcpClient.callTool to locate the changes.
♻️ Duplicate comments (3)
components/search-related.tsx (1)
20-29: ⚠️ Potential issue | 🟡 Minor

Related-query body still missing `drawnFeatures` (and other geospatial context).

Only `mapProvider` was added; the route handler also reads `drawnFeatures`, `timezone`, and `latitude`/`longitude` from the request body (app/api/chat/stream/route.ts:34-49). Related-query follow-ups will run without the user-drawn map context that typed messages from `ChatPanel.handleSubmit` carry, producing degraded geospatial routing.

♻️ Proposed fix
-import { useSettingsStore } from '@/lib/store/settings'
+import { useSettingsStore } from '@/lib/store/settings'
+import { useMapData } from '@/components/map/map-data-context'
@@
   const { append } = useChatContext()
   const { mapProvider } = useSettingsStore()
+  const { mapData } = useMapData()
@@
   const handleRelatedClick = async (query: string) => {
     await append(
       { role: 'user', content: query },
       {
         body: {
           mapProvider,
+          drawnFeatures: mapData.drawnFeatures || [],
+          timezone: mapData.currentTimezone || 'UTC',
         }
       }
     )
   }

Alternative: centralize the default body in `ChatProvider` so every `append()` inherits the same geospatial context.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@components/search-related.tsx` around lines 20 - 29, The related-query handler handleRelatedClick currently only sends mapProvider in the append() body; include the same geospatial context fields that the route expects (drawnFeatures, timezone, latitude, longitude) so follow-ups carry user-drawn map context—either add those properties to the body passed in handleRelatedClick or centralize a default request body in ChatProvider so all append() calls (including ChatPanel.handleSubmit and handleRelatedClick) inherit drawnFeatures/timezone/latitude/longitude along with mapProvider.components/header-search-button.tsx (1)
128-130: ⚠️ Potential issue | 🟡 Minor

Duplicate base64 conversion still present.

When a single provider is active, `fileData` reads the chosen blob, and then `mapboxImageData`/`googleImageData` reads the same blob again — doubling the FileReader work and roughly doubling the in-memory base64 strings before serialization. Reuse the already-computed value:

♻️ Proposed fix
-    const fileData = await blobToBase64((mapboxBlob || googleBlob)!)
-    const mapboxImageData = mapboxBlob ? await blobToBase64(mapboxBlob) : undefined
-    const googleImageData = googleBlob ? await blobToBase64(googleBlob) : undefined
+    const mapboxImageData = mapboxBlob ? await blobToBase64(mapboxBlob) : undefined
+    const googleImageData = googleBlob ? await blobToBase64(googleBlob) : undefined
+    const fileData = (mapboxImageData ?? googleImageData)!

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@components/header-search-button.tsx` around lines 128 - 130, The code converts the same blob to base64 multiple times; change the order to compute mapboxImageData = mapboxBlob ? await blobToBase64(mapboxBlob) : undefined and googleImageData = googleBlob ? await blobToBase64(googleBlob) : undefined once, then set fileData = mapboxImageData ?? googleImageData (or otherwise reuse the already-computed base64) so that blobToBase64 is not called twice for the same blob; update references to fileData, mapboxImageData, googleImageData and keep using blobToBase64, mapboxBlob and googleBlob identifiers to locate the change.components/followup-panel.tsx (1)
17-37: ⚠️ Potential issue | 🟡 Minor

Input is cleared before the awaited `append`, so failures silently drop the user's text.

`setInput('')` runs on line 22 before `await append(...)`. If `append` rejects (network error, 5xx, aborted stream), the only feedback is a `console.error` and the input field is already empty — the user must retype. Restore the input on failure (or clear only after success), and consider surfacing the error in the UI.

♻️ Suggested fix
   const handleSubmit = async (event: React.FormEvent<HTMLFormElement>) => {
     event.preventDefault()
     if (!input.trim() || isLoading) return

     const content = input.trim()
-    setInput('')
-
     try {
       await append(
         { role: 'user', content },
         {
           body: {
             mapProvider,
             drawnFeatures: mapData.drawnFeatures || [],
           }
         }
       )
+      setInput('')
     } catch (error) {
       console.error('Failed to send follow-up:', error)
+      // keep `input` populated so the user can retry
     }
   }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@components/followup-panel.tsx` around lines 17 - 37, The handler clears the input before awaiting append, causing user text to be lost on failure; in handleSubmit, keep the trimmed content in a local variable (content) and either move setInput('') to after the awaited append succeeds or restore the input in the catch by calling setInput(content); also replace the console.error with a UI-facing error state (e.g., setError) or show a toast so users see failures—update references to handleSubmit, append, setInput, and mapData.drawnFeatures accordingly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@app/api/chat/stream/route.ts`:
- Around line 320-374: The synthetic rows pushed into aiMessages (see the
toolOutputs loop, the block adding answer, relatedQueries, and the final 'end'
marker) use fresh nanoid() each save, causing duplicate rows; change those id
generations to deterministic IDs derived from the chat context (e.g., use chatId
+ turnIndex + role/type like `${chatId}:${turnIndex}:response`) or only persist
the current turn's delta instead of re-writing all messages so
onConflictDoUpdate(id) will match existing rows; update the id assignment for
the tool output block (tool.toolName), the assistant answer block, the
related-queries block, and the end marker to use that deterministic scheme.
- Around line 75-98: The hardcoded reply branch using lastText exact matches
(variable lastText) is fragile and bypasses taskManager/RAG/tool persistence
while saving only the raw definition via saveChatAsync; fix by normalizing the
input (e.g., strip punctuation and whitespace and lowercase lastText) or better
yet move these canned definitions into the system prompt/source-of-truth, and
ensure the branch funnels responses through the same pipeline as other chats:
invoke taskManager or the normal response composer so RAG augmentation and tool
outputs are applied, and when persisting use the same save path (e.g.,
saveChatAsync) but include the full response metadata (related queries/tool
outputs) rather than only the raw definition so rehydrated history contains
relatedQueries and usage fields; locate changes around lastMsg/lastText
handling, the conditional block that constructs definition and the call to
saveChatAsync to implement this.
- Around line 217-298: The stream's start callback in handleResolutionSearch can
throw without reaching the outer catch and leaves useChat stuck; wrap the entire
start(controller) body in a try/catch and on error call controller.error(err) or
enqueue synthetic finish frames (e: and d: with finishReason:"error") before
closing so the client receives a terminal event; also stop doing fire-and-forget
saveChatAsync(...) — schedule it via after(() => saveChatAsync(chatId, userId,
messages, analysisResult.summary || '')) (or the equivalent after wrapper used
elsewhere) so the save runs safely in the serverless background.
- Around line 53-64: The current in-place splice and aggressive filter/map
remove tool turns and mutate messages used later for title heuristics: instead
of mutating messages with messages.splice(...) create a trimmed copy (e.g.,
trimmedMessages = messages.slice(-maxMsgs)) so the original array (and original
first user message) remains intact; when building messages preserve tool turns
and toolInvocations by including role === 'tool' and copying m.toolInvocations
in the map that produces CoreMessage (rather than dropping those fields);
finally pass the preserved originalFirstUserMessage (capture messages[0] before
trimming) into saveChatAsync for title computation so the title heuristic uses
the original user prompt.
- Around line 168-176: relatedQueries is left as {} when errorOccurred and then
emitted unconditionally, wasting bandwidth; change logic so relatedQueries is
only set and streamed when querySuggestor returned non-empty results.
Specifically: keep using the sanitizedMessages -> querySuggestor(...) call but
set relatedQueries to undefined/null by default, and when emitting the
annotation that currently sends { type: 'related', relatedQueries } only do so
if !errorOccurred && relatedQueries?.items?.length > 0; update the emission code
that writes the { type: 'related', relatedQueries } annotation to be conditional
on that check.
In `@components/chat-messages.tsx`:
- Line 18: The variable isLoading is currently unused after destructuring from
useChatContext(); either remove it from the destructuring to eliminate the
unused variable, or thread it into the message rendering so loading state can be
shown — e.g., pass isLoading as a prop into CollapsibleMessage (or the assistant
skeleton component used to render assistant messages) and use it to show the
streaming/loading indicator there; update the useChatContext() destructure in
components/chat-messages.tsx and the CollapsibleMessage/assistant component
props and handling accordingly.
- Around line 31-40: Tool result annotations are being rendered globally from
annotations (useChat().data) causing old tool outputs to accumulate; either have
the backend include a per-message identifier (e.g., messageId or turnIndex on
tool_result) and then filter annotations by that id before pushing to
renderedMessages, or stop rendering all tool_result entries up-front and instead
render tool outputs inside the per-message loop (use message.toolInvocations or
the current message object) so you only render tool results that belong to the
current assistant message; update the rendering logic around the annotations
variable and the ToolResultRenderer invocations (and adjust where
renderedMessages is populated) to use the per-message identifier or
message.toolInvocations to scope tool results.
In `@components/chat-panel.tsx`:
- Around line 99-128: The FileReader promise in handleSubmit (creation of
FileReader, reader.onloadend/reader.onerror and awaiting new Promise) must be
wrapped in try/catch so reader.onerror rejections are handled; on error,
log/report a user-visible error (e.g., set an error state or show a toast),
avoid calling setInput('')/clearAttachment() or restore them if already cleared,
and return early so append(...) is not called; update the block around
FileReader/reader.onerror and the subsequent setInput/clearAttachment calls to
only run after a successful read and ensure append(...) is only invoked when
fileData is available or when no file, keeping existing append(...) usage
intact.
In `@components/chat-provider.tsx`:
- Line 22: Update the handleSubmit signature in the chat provider interface to
use the proper ChatRequestOptions type instead of any: change handleSubmit: (e?:
React.FormEvent<HTMLFormElement>, options?: any) => void to accept options?:
ChatRequestOptions; ensure you import or reference ChatRequestOptions from the
SDK/module where other request types come from (matching existing imports),
update any related usages/calls to satisfy the stronger type, and run typechecks
to fix any mismatches.
- Around line 47-67: The ChatProvider currently ignores useChat's error and
control callbacks so 401s and stream failures are swallowed; update ChatProvider
to extract error, reload and stop (and optionally onError/onResponse handlers)
from useChat and include them in the memoized context value and ChatContextValue
type so consumers (e.g., ChatMessages/ChatPanel) can show a toast/banner on
error and provide retry/stop controls; ensure the memoized value returned by
ChatProvider contains error, reload, and stop alongside
messages/input/append/handleSubmit/isLoading/data.
In `@components/video-search-section.tsx`:
- Line 28: The render currently accesses searchResults.searchParameters.q
directly which can throw if searchParameters is missing; update the usage inside
ToolBadge (component: ToolBadge) to use optional chaining and a safe fallback
(e.g., searchResults?.searchParameters?.q ?? '' or a placeholder like '—') so
rendering remains robust against partial/malformed payloads while keeping the
rest of the JSX unchanged.
In `@lib/agents/researcher.tsx`:
- Around line 134-145: newSegments can include tool-calls without matching
tool-results when a stream errors after a 'tool-call' but before its
'tool-result', and also can create an assistant message with empty text when
fullResponse is empty; fix by filtering toolCalls to only include those that
have a corresponding toolResult (match by the ToolCallPart/ToolResultPart
identifier — verify and use toolCallId if present, otherwise toolName) before
building newSegments, and skip adding an assistant text part when fullResponse
is empty; update the logic around the variables newSegments, toolCalls,
toolResponses, ToolCallPart, ToolResultPart, fullResponse, and the errorOccurred
handling so only paired tool-call/tool-result entries are appended to message
history.
In `@lib/agents/writer.tsx`:
- Around line 22-35: Replace the nonexperimental_streamText usage and manual
for-await reassembly with generateText: call generateText({ model: await
getModel() as LanguageModel, maxTokens: 2500, system: systemToUse, messages })
await its result and return the returned full text directly instead of looping
over result.textStream and concatenating into fullResponse; remove the
fullResponse variable and the for-await loop around nonexperimental_streamText.
---
Outside diff comments:
In `@lib/agents/tools/geospatial.tsx`:
- Around line 270-300: The code currently falls back to name: 'unknown_tool'
when toolName is undefined, masking missing MCP tool mappings; update the
toolName resolution (the IIFE that calls prefer(...)) to detect when prefer(...)
returns undefined and immediately throw a clear, structured Error (including
queryType and the attempted tool identifiers) instead of returning undefined,
and remove the "?? 'unknown_tool'" fallback from the mcpClient.callTool call so
mcpClient.callTool({ name: toolName, arguments: toolArgs }) is only invoked with
a validated toolName; reference prefer(), toolName, and mcpClient.callTool to
locate the changes.
---
Duplicate comments:
In `@components/followup-panel.tsx`:
- Around line 17-37: The handler clears the input before awaiting append,
causing user text to be lost on failure; in handleSubmit, keep the trimmed
content in a local variable (content) and either move setInput('') to after the
awaited append succeeds or restore the input in the catch by calling
setInput(content); also replace the console.error with a UI-facing error state
(e.g., setError) or show a toast so users see failures—update references to
handleSubmit, append, setInput, and mapData.drawnFeatures accordingly.
In `@components/header-search-button.tsx`:
- Around line 128-130: The code converts the same blob to base64 multiple times;
change the order to compute mapboxImageData = mapboxBlob ? await
blobToBase64(mapboxBlob) : undefined and googleImageData = googleBlob ? await
blobToBase64(googleBlob) : undefined once, then set fileData = mapboxImageData
?? googleImageData (or otherwise reuse the already-computed base64) so that
blobToBase64 is not called twice for the same blob; update references to
fileData, mapboxImageData, googleImageData and keep using blobToBase64,
mapboxBlob and googleBlob identifiers to locate the change.
In `@components/search-related.tsx`:
- Around line 20-29: The related-query handler handleRelatedClick currently only
sends mapProvider in the append() body; include the same geospatial context
fields that the route expects (drawnFeatures, timezone, latitude, longitude) so
follow-ups carry user-drawn map context—either add those properties to the body
passed in handleRelatedClick or centralize a default request body in
ChatProvider so all append() calls (including ChatPanel.handleSubmit and
handleRelatedClick) inherit drawnFeatures/timezone/latitude/longitude along with
mapProvider.
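The "centralize the default body in `ChatProvider`" suggestion above can be sketched as a small wrapper. This is a hypothetical illustration, not repo code; `withDefaultBody` and the assumed `append(message, { body })` signature are stand-ins for the actual useChat types:

```typescript
// Hypothetical sketch: wrap `append` so every call merges in shared
// geospatial context, with per-call body fields taking precedence.
type Body = Record<string, unknown>
type AppendFn = (message: unknown, options?: { body?: Body }) => Promise<unknown>

function withDefaultBody(append: AppendFn, defaults: () => Body): AppendFn {
  return (message, options = {}) =>
    append(message, { ...options, body: { ...defaults(), ...options.body } })
}
```

ChatProvider would expose the wrapped function instead of the raw `append`, so SearchRelated, FollowupPanel, and ChatPanel all inherit the same `mapProvider`/`drawnFeatures`/`timezone` defaults without repeating them at each call site.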
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: cb5fc12d-93ec-4e6d-b2e3-a017d730650a
📒 Files selected for processing (23)
- app/actions.tsx
- app/api/chat/stream/route.ts
- app/search/[id]/page.tsx
- components/chat-messages.tsx
- components/chat-panel.tsx
- components/chat-provider.tsx
- components/chat.tsx
- components/collapsible-message.tsx
- components/copilot.tsx
- components/followup-panel.tsx
- components/header-search-button.tsx
- components/mobile-icons-bar.tsx
- components/resolution-carousel.tsx
- components/search-related.tsx
- components/search-section.tsx
- components/tool-result-renderer.tsx
- components/video-search-section.tsx
- lib/actions/suggest.ts
- lib/agents/inquire.tsx
- lib/agents/researcher.tsx
- lib/agents/tools/geospatial.tsx
- lib/agents/tools/retrieve.tsx
- lib/agents/writer.tsx
💤 Files with no reviewable changes (1)
- app/actions.tsx
📜 Review details
🧰 Additional context used
🧠 Learnings (7)
📓 Common learnings
Learnt from: ngoiyaeric
Repo: QueueLab/QCX PR: 0
File: :0-0
Timestamp: 2026-04-25T08:10:37.659Z
Learning: In the QCX repository (feat/ai-sdk-ui-migration branch and going forward), there is no token generation/counting tracking and no multimodal LLM features. Base64 image data sent via `append()` body fields in components like `header-search-button.tsx` and `resolution-carousel.tsx` is for server-side `resolution_search` processing only — it is NOT passed as multimodal message content parts to a vision model. Avoid flagging token tracking or multimodal LLM handling gaps as review concerns in this codebase.
Learnt from: ngoiyaeric
Repo: QueueLab/QCX PR: 0
File: :0-0
Timestamp: 2026-02-08T16:50:33.791Z
Learning: The PR branch implements RAG (Retrieval-Augmented Generation) via `lib/actions/rag.ts` which uses Supabase RPC functions `generate_embedding()` and `hybrid_search()` to retrieve relevant context from past conversations. However, there's a bug on line 433 of `app/actions.tsx` where `currentSystemPrompt` is passed to the researcher instead of `augmentedSystemPrompt`, preventing the retrieved context from being used.
📚 Learning: 2026-02-08T16:50:33.791Z
Learnt from: ngoiyaeric
Repo: QueueLab/QCX PR: 0
File: :0-0
Timestamp: 2026-02-08T16:50:33.791Z
Learning: The PR branch implements RAG (Retrieval-Augmented Generation) via `lib/actions/rag.ts` which uses Supabase RPC functions `generate_embedding()` and `hybrid_search()` to retrieve relevant context from past conversations. However, there's a bug on line 433 of `app/actions.tsx` where `currentSystemPrompt` is passed to the researcher instead of `augmentedSystemPrompt`, preventing the retrieved context from being used.
Applied to files:
`lib/agents/inquire.tsx`, `lib/agents/writer.tsx`, `components/search-related.tsx`, `components/tool-result-renderer.tsx`, `components/followup-panel.tsx`, `lib/actions/suggest.ts`, `components/copilot.tsx`, `app/search/[id]/page.tsx`, `lib/agents/researcher.tsx`, `components/chat-messages.tsx`, `components/chat-panel.tsx`, `app/api/chat/stream/route.ts`
📚 Learning: 2026-04-25T08:10:37.659Z
Learnt from: ngoiyaeric
Repo: QueueLab/QCX PR: 0
File: :0-0
Timestamp: 2026-04-25T08:10:37.659Z
Learning: In the QCX repository (feat/ai-sdk-ui-migration branch and going forward), there is no token generation/counting tracking and no multimodal LLM features. Base64 image data sent via `append()` body fields in components like `header-search-button.tsx` and `resolution-carousel.tsx` is for server-side `resolution_search` processing only — it is NOT passed as multimodal message content parts to a vision model. Avoid flagging token tracking or multimodal LLM handling gaps as review concerns in this codebase.
Applied to files:
`components/resolution-carousel.tsx`, `components/search-related.tsx`, `components/followup-panel.tsx`, `components/mobile-icons-bar.tsx`, `lib/actions/suggest.ts`, `components/chat-provider.tsx`, `components/header-search-button.tsx`, `components/copilot.tsx`, `app/search/[id]/page.tsx`, `components/chat-messages.tsx`, `components/chat-panel.tsx`, `lib/agents/tools/geospatial.tsx`, `app/api/chat/stream/route.ts`
📚 Learning: 2026-01-13T13:26:30.086Z
Learnt from: ngoiyaeric
Repo: QueueLab/QCX PR: 0
File: :0-0
Timestamp: 2026-01-13T13:26:30.086Z
Learning: Workers that rely on ai.generateObject (geojsonParser, map-command-generator, feedback-analyzer) must not use the X.AI 'grok-4-fast-non-reasoning' model because X.AI’s chat/completions rejects JSON Schema structured-output arguments; prefer OpenAI gpt-4o for structured outputs.
Applied to files:
`components/tool-result-renderer.tsx`, `lib/actions/suggest.ts`
📚 Learning: 2026-01-17T06:14:51.070Z
Learnt from: ngoiyaeric
Repo: QueueLab/QCX PR: 0
File: :0-0
Timestamp: 2026-01-17T06:14:51.070Z
Learning: For structured output tasks using ai.generateObject (resolution-search, geojsonParser, map-command-generator, feedback-analyzer), prefer OpenAI gpt-4o. While xAI's grok-4-1-fast-reasoning technically supports structured outputs and vision, OpenAI has proven more reliable integration with the AI SDK's generateObject function and Zod schemas in production.
Applied to files:
`lib/actions/suggest.ts`
📚 Learning: 2026-01-16T10:02:40.220Z
Learnt from: ngoiyaeric
Repo: QueueLab/QCX PR: 0
File: :0-0
Timestamp: 2026-01-16T10:02:40.220Z
Learning: The router agent and resolution search agent must implement a feedback loop where tool execution results (including GeoTIFF URLs, embeddings, and satellite analysis data) are fed back to the AI model's context through enriched messages before generating the final response, ensuring the foundational model receives and can synthesize all satellite intelligence data.
Applied to files:
`components/chat-messages.tsx`
📚 Learning: 2026-02-10T08:45:03.719Z
Learnt from: ngoiyaeric
Repo: QueueLab/QCX PR: 0
File: :0-0
Timestamp: 2026-02-10T08:45:03.719Z
Learning: The app/api/chats/route.ts endpoint correctly returns { chats: [] } for unauthenticated users instead of a 401 error. This is the intended behavior for better UX and should not be changed to return 401.
Applied to files:
`components/chat-panel.tsx`, `app/api/chat/stream/route.ts`
🔇 Additional comments (18)
lib/agents/inquire.tsx (1)
1-30: LGTM — `generateObject` migration is clean. The switch to `generateObject` matches the prior recommendation and the call site in `app/api/chat/stream/route.ts:111-128` correctly consumes the returned object. Schema-based enforcement makes the embedded JSON shape in the system prompt informational rather than load-bearing — fine to keep for clarity.

lib/actions/suggest.ts (1)
1-34: LGTM — `generateObject` conversion looks correct. `getSuggestions` cleanly returns the validated object and the call site in `components/chat-panel.tsx:145-150` matches the new `Promise<PartialRelated>` signature. The X.AI structured-output compatibility concern previously raised still applies to `getModel()` selection but is not a new issue.

components/mobile-icons-bar.tsx (1)
31-47: LGTM — clean migration to chat context. `setMessages([])` followed by `router.push('/')` correctly replaces the removed `useActions().clearChat()` flow and matches the pattern used elsewhere in the PR.

components/resolution-carousel.tsx (1)
40-56: LGTM — FileReader error handling addressed. `onerror`/`onabort` rejecters resolve the prior hang-on-failure concern; the `append()` body shape matches the server route's `resolution_search` handler in `app/api/chat/stream/route.ts:67-71`.

components/chat.tsx (1)
70-80: Refresh-trigger transition guard correctly addresses prior feedback. `prevIsLoadingRef` is initialized to `false`, so on initial mount of `/search/[id]` with persisted assistant-ended history (`isLoading === false`) the effect short-circuits, and `router.refresh()` only fires on a true→false transition. Behavior matches the previously requested guard.

components/collapsible-message.tsx (1)
28-37: Sync effect simplification matches prior feedback. `isCollapsed` is now derived synchronously and the effect depends only on `isLastMessage`, so user-initiated toggles are no longer overridden when `isCollapsed` flips. The early-return for the non-collapsed case (Line 35) preserves the original render path.

components/copilot.tsx (1)
61-77: Skip flow now uses `body` metadata as previously recommended. The `{"action":"skip"}` sentinel no longer appears in user-visible message content; instead `body: { action: 'skip' }` is sent as metadata and the message content is the user's actual query (or a readable `'Skipped'` fallback). This matches the existing `body`-based pattern used by `followup-panel.tsx` and avoids leaking the sentinel into UI/persisted history.

components/search-section.tsx (1)
14-54: Synchronous parse with safe fallback looks correct. The try/catch around `JSON.parse`, the `'error' in parsed` narrowing, and the skeleton fallback when `parsed` is undefined cleanly replace the prior streaming control flow. The duplicate error branch (also handled upstream in `ToolResultRenderer`) is reasonable defense-in-depth.

lib/agents/tools/retrieve.tsx (1)
4-48: Timeout + structured error return aligns with the new client-side rendering contract. The 15s `AbortController`, response-OK guard, and `{ error: ... }` payloads on every failure path produce shapes that `ToolResultRenderer`'s `result.error` short-circuit and `RetrieveSection`'s `data.results` consumer both handle correctly. `clearTimeout` in `finally` covers the success, non-OK, and throw paths.

components/tool-result-renderer.tsx (1)
14-50: Error short-circuit addresses prior crash risk. `result.error` is now handled before any switch dispatch, so `SearchSection`/`VideoSearchSection`/`RetrieveSection` no longer receive `{ error: string }` payloads they can't render. The geospatial branch uses optional chaining for `mcp_response`/`location` and gates on `result.type === 'MAP_QUERY_TRIGGER'`, which is consistent with the new geospatial tool's success shape.

components/chat-panel.tsx (1)
19-31: `messages` prop is redundant with `useChatContext()`. `ChatPanel` reads `messages` via context downstream and the same context already exposes `messages`. The prop creates two sources of truth. Drop the prop and read `messages` from `useChatContext()` directly.

app/search/[id]/page.tsx (1)
41-49: Rehydration drops annotations/tool invocations — past chats lose tool results, inquiry, related queries. The DB schema (`lib/db/schema.ts:52-61`) and the `Message[]` constructed here only carry `id`/`role`/`content`/`createdAt`. Tool outputs and the `inquiry`/`related` annotations emitted by `/api/chat/stream` (route.ts lines 119, 190, 200, 269) live solely on the in-memory `data`/annotations stream; reopening a chat via `/search/[id]` will never re-populate `useChatContext().annotations`, so `ToolResultRenderer`, `Copilot`, and `SearchRelated` will be empty for historical sessions. Persist annotations (e.g., a `jsonb` column) and rehydrate them, otherwise this regresses the prior RSC behavior where rendered tool UI was reconstructed from `AIState`.

Also note: `'system'` is included in `validRoles`, but `components/chat-messages.tsx` only renders `user` and `assistant` — system rows will be silently dropped on display while still being sent back to the route as part of the `messages` payload. Either filter to `user`/`assistant` here, or render system explicitly.

components/chat-messages.tsx (1)
32-32: Duplicates `getToolResults` provider logic. `useChatContext().getToolResults` already filters `tool_result` annotations. Re-filtering here splits the abstraction. Consume the provider helper (or drop the helper from the provider) so there is one place that knows the annotation shape.

app/api/chat/stream/route.ts (4)
49-49: RAG augmentation still missing — same bug carried over from `app/actions.tsx`. `currentSystemPrompt` is passed straight to `researcher` at line 146 with no call to `generate_embedding`/`hybrid_search` from `lib/actions/rag.ts`. The retrieved-context infrastructure stays dormant in the new endpoint, exactly mirroring the previously-reported defect on `app/actions.tsx:433` (where `currentSystemPrompt` was sent instead of `augmentedSystemPrompt`). If RAG is intended for production after the migration, augment the prompt here before invoking `researcher`.

Based on learnings: "there's a bug on line 433 of `app/actions.tsx` where `currentSystemPrompt` is passed to the researcher instead of `augmentedSystemPrompt`, preventing the retrieved context from being used." The same pattern is repeated here.
93-93: Fire-and-forget `saveChatAsync` will be killed mid-write on Vercel serverless. All three branches dispatch `saveChatAsync(...)` without `await` immediately before returning the streaming `Response`. On Vercel's Node serverless runtime the function instance can be frozen as soon as the response body is consumed by the platform, dropping the in-flight DB write and producing intermittent missing chat persistence — particularly for fast streams (the hardcoded path) and the resolution-search path where the response is small. Wrap with Next.js `after()` so the runtime keeps the function alive until the save completes.

```diff
-import { CoreMessage, ToolResultPart, streamText, LanguageModel } from 'ai'
+import { CoreMessage, ToolResultPart, streamText, LanguageModel } from 'ai'
+import { after } from 'next/server'
@@
-    saveChatAsync(chatId, userId, messages, definition)
+    after(() => saveChatAsync(chatId, userId, messages, definition))
@@
-      saveChatAsync(chatId, userId, messages, answer, allToolOutputs, relatedQueries)
+      after(() => saveChatAsync(chatId, userId, messages, answer, allToolOutputs, relatedQueries))
@@
-    saveChatAsync(chatId, userId, messages, analysisResult.summary || '')
+    after(() => saveChatAsync(chatId, userId, messages, analysisResult.summary || ''))
```

Also applies to: 210-210, 286-286
15-18: LGTM — protocol contract is now correct. `x-vercel-ai-data-stream: v1` is set on every streamed `Response` via `streamHeaders`, the 401 has `Content-Type: application/json`, all stream paths emit both `e:` (finish-step) and `d:` (finish-message) before `controller.close()`, and annotation frames use the `2:[...]` data-part prefix that the provider's `data` field consumes. These changes resolve the previously-flagged blockers around `useChat` hanging in `isLoading` and missing annotations.

Also applies to: 23-26

145-160: No issues found. The `researcher` function properly returns `newSegments` as a typed, initialized array that is never undefined, and does not mutate the passed `messages` array.

components/video-search-section.tsx (1)
14-21: LGTM — JSON.parse is now guarded. The previous concern about an unguarded `JSON.parse` crashing the message subtree is addressed by the try/catch with a fallback to the skeleton UI.
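For reference, the `0:` / `2:` / `e:` / `d:` frame prefixes discussed in these comments can be sketched as a tiny classifier. The prefix meanings follow the Vercel AI data stream protocol; the helper itself is illustrative and not part of the PR:

```typescript
// Minimal classifier for the data-stream frame prefixes used in the route:
// 0: text delta, 2: data/annotation array, e: finish-step, d: finish-message.
type FrameKind = 'text' | 'data' | 'finish-step' | 'finish-message' | 'unknown'

function classifyFrame(line: string): FrameKind {
  if (line.startsWith('0:')) return 'text'
  if (line.startsWith('2:')) return 'data'
  if (line.startsWith('e:')) return 'finish-step'
  if (line.startsWith('d:')) return 'finish-message'
  return 'unknown'
}
```

A client that sees `e:` and `d:` frames knows the step and message are finished; if neither arrives, `useChat` stays in `isLoading`, which is exactly the failure mode flagged above.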
```ts
// Build core messages from client messages
const messages: CoreMessage[] = (clientMessages || [])
  .filter((m: any) => m.role === 'user' || m.role === 'assistant')
  .map((m: any) => ({
    role: m.role,
    content: m.content
  }))

// Trim to max messages
if (messages.length > maxMsgs) {
  messages.splice(0, messages.length - maxMsgs)
}
```
Trimming via splice breaks the title heuristic and drops tool turns from the replay context.

Two issues co-located here:

1. `messages.splice(0, messages.length - maxMsgs)` mutates the array in place, then `saveChatAsync` later derives the chat title from `messages[0]` (lines 310-318). After trimming, `messages[0]` is whichever message survived the cut — often an old assistant turn, not the user's original prompt — so titles drift to assistant text like "Here is the analysis…" instead of the user's question.
2. The `.filter((m) => m.role === 'user' || m.role === 'assistant')` then `.map((m) => ({ role, content }))` drops `m.toolInvocations` and any `role === 'tool'` turns. Replayed assistant messages that previously made tool calls now look like plain text to the model, making it likely to re-issue the same calls. If this is intentional, add a comment.
```diff
-  // Build core messages from client messages
-  const messages: CoreMessage[] = (clientMessages || [])
-    .filter((m: any) => m.role === 'user' || m.role === 'assistant')
-    .map((m: any) => ({
-      role: m.role,
-      content: m.content
-    }))
-
-  // Trim to max messages
-  if (messages.length > maxMsgs) {
-    messages.splice(0, messages.length - maxMsgs)
-  }
+  // Capture the original first user message before trimming so chat title stays stable.
+  const originalFirstUserMessage = (clientMessages || []).find((m: any) => m.role === 'user')
+
+  const messages: CoreMessage[] = (clientMessages || [])
+    .filter((m: any) => m.role === 'user' || m.role === 'assistant' || m.role === 'tool')
+    .map((m: any) => ({
+      role: m.role,
+      content: m.content,
+      // preserve assistant tool calls so the model doesn't re-issue them
+      ...(m.toolInvocations ? { toolInvocations: m.toolInvocations } : {}),
+    }))
+
+  if (messages.length > maxMsgs) {
+    messages.splice(0, messages.length - maxMsgs)
+  }
```

…and pass `originalFirstUserMessage` into `saveChatAsync` for title computation.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/api/chat/stream/route.ts` around lines 53 - 64, The current in-place
splice and aggressive filter/map remove tool turns and mutate messages used
later for title heuristics: instead of mutating messages with
messages.splice(...) create a trimmed copy (e.g., trimmedMessages =
messages.slice(-maxMsgs)) so the original array (and original first user
message) remains intact; when building messages preserve tool turns and
toolInvocations by including role === 'tool' and copying m.toolInvocations in
the map that produces CoreMessage (rather than dropping those fields); finally
pass the preserved originalFirstUserMessage (capture messages[0] before
trimming) into saveChatAsync for title computation so the title heuristic uses
the original user prompt.
```ts
const lastMsg = messages[messages.length - 1]
const lastText = typeof lastMsg?.content === 'string' ? lastMsg.content.trim().toLowerCase() : ''
if (lastText === 'what is a planet computer?' || lastText === 'what is qcx-terra?') {
  const definition = lastText === 'what is a planet computer?'
    ? `A planet computer is a proprietary environment aware system that interoperates Climate forecasting, mapping and scheduling using cutting edge multi-agents to streamline automation and exploration on a planet. Available for our Pro and Enterprise customers. [QCX Pricing](https://www.queue.cx/#pricing)`
    : `QCX-Terra is a model garden of pixel level precision geospatial foundational models for efficient land feature predictions from satellite imagery. Available for our Pro and Enterprise customers. [QCX Pricing](https://www.queue.cx/#pricing)`;

  const encoder = new TextEncoder()
  const stream = new ReadableStream({
    start(controller) {
      controller.enqueue(encoder.encode(`0:${JSON.stringify(definition)}\n`))
      controller.enqueue(encoder.encode(`2:[{"relatedQueries":{"items":[]},"type":"related"}]\n`))
      controller.enqueue(encoder.encode(`e:{"finishReason":"stop","usage":{"promptTokens":0,"completionTokens":0}}\n`))
      controller.enqueue(encoder.encode(`d:{"finishReason":"stop","usage":{"promptTokens":0,"completionTokens":0}}\n`))
      controller.close()
    }
  })

  saveChatAsync(chatId, userId, messages, definition)

  return new Response(stream, {
    headers: streamHeaders
  })
}
```
Hardcoded reply branch is fragile and disagrees with the system prompt source-of-truth.

- The Planet Computer string here says "Climate forecasting" — fine, but the original concern about exact `lastText === 'what is a planet computer?'` matching still applies: a missing trailing `?`, an extra space, or any capitalization quirk in punctuation will fall through to the model and produce a different answer, creating visible inconsistency. Normalize aggressively (strip non-alphanumerics) or, better, move these definitions into the system prompt only.
- This branch also bypasses `taskManager`, RAG augmentation, and tool persistence, but still calls `saveChatAsync(chatId, userId, messages, definition)` with the raw `definition` — meaning these turns don't capture related queries or any tool outputs in DB. If the user later asks a follow-up, the rehydrated history won't include the canned answer's related queries.
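The "normalize aggressively" suggestion can be sketched as a standalone helper. The `normalizeQuestion` name, the lookup-table shape, and the shortened definition placeholders are all illustrative, not from the PR:

```typescript
// Hypothetical normalizer for the canned-answer branch: strip everything
// except letters and digits so punctuation, whitespace, and case quirks
// all collapse to the same lookup key.
function normalizeQuestion(text: string): string {
  return text.toLowerCase().replace(/[^a-z0-9]/g, '')
}

// Canned definitions keyed by normalized question. The values are
// placeholders standing in for the full definition strings in the route.
const cannedAnswers: Record<string, string> = {
  [normalizeQuestion('what is a planet computer?')]: 'planet-computer definition',
  [normalizeQuestion('what is qcx-terra?')]: 'qcx-terra definition'
}

function lookupCanned(userText: string): string | undefined {
  return cannedAnswers[normalizeQuestion(userText)]
}
```

With this, "What is a Planet Computer", "what is a planet computer?!", and "what is a  planet computer ?" all hit the same entry instead of falling through to the model.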
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/api/chat/stream/route.ts` around lines 75 - 98, The hardcoded reply
branch using lastText exact matches (variable lastText) is fragile and bypasses
taskManager/RAG/tool persistence while saving only the raw definition via
saveChatAsync; fix by normalizing the input (e.g., strip punctuation and
whitespace and lowercase lastText) or better yet move these canned definitions
into the system prompt/source-of-truth, and ensure the branch funnels responses
through the same pipeline as other chats: invoke taskManager or the normal
response composer so RAG augmentation and tool outputs are applied, and when
persisting use the same save path (e.g., saveChatAsync) but include the full
response metadata (related queries/tool outputs) rather than only the raw
definition so rehydrated history contains relatedQueries and usage fields;
locate changes around lastMsg/lastText handling, the conditional block that
constructs definition and the call to saveChatAsync to implement this.
```ts
let relatedQueries = {}
if (!errorOccurred) {
  const sanitizedMessages: CoreMessage[] = messages.map((m: any) => {
    if (Array.isArray(m.content)) {
      return { ...m, content: m.content.filter((part: any) => part.type !== 'image') } as CoreMessage
    }
    return m
  })
  relatedQueries = await querySuggestor(sanitizedMessages)
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# Confirm querySuggestor's return shape vs the consumer's expectation.
rg -nP -C5 "export\s+async\s+function\s+querySuggestor" lib/agents/query-suggestor.tsx
rg -nP -C5 "relatedQueries|PartialRelated" lib/schema/related.ts
rg -nP -C2 "related\.relatedQueries|relatedQueries\.items" components/
```

Repository: QueueLab/QCX
Length of output: 1244

🏁 Script executed:

```shell
# Find where PartialRelated is defined (broader search)
rg -n "PartialRelated" --type ts --type tsx
```

Repository: QueueLab/QCX
Length of output: 84

🏁 Script executed:

```shell
# Get complete querySuggestor function implementation
rg -n -A20 "export\s+async\s+function\s+querySuggestor" --type ts --type tsx
```

Repository: QueueLab/QCX
Length of output: 84

🏁 Script executed:

```shell
# Check the route.ts file around lines 168-200 to see how annotation is emitted
fd "route.ts" | xargs head -n 210 | tail -n 50
```

Repository: QueueLab/QCX
Length of output: 1650

🏁 Script executed:

```shell
# Find PartialRelated definition
rg -n "PartialRelated" -t ts -t js
```

Repository: QueueLab/QCX
Length of output: 1868

🏁 Script executed:

```shell
# Find the chat stream route specifically
fd "stream/route" -e ts -e js
```

Repository: QueueLab/QCX
Length of output: 38

🏁 Script executed:

```shell
# Get complete querySuggestor function
rg -n -A30 "export async function querySuggestor" -t ts -t js
```

Repository: QueueLab/QCX
Length of output: 2651

🏁 Script executed:

```shell
# Get the relatedSchema definition to understand the structure
rg -n -A15 "export.*relatedSchema" lib/schema/related.tsx
```

Repository: QueueLab/QCX
Length of output: 341

🏁 Script executed:

```shell
# Get the chat stream route to see annotation emission around lines 168-200
cat -n app/api/chat/stream/route.ts | sed -n '160,205p'
```

Repository: QueueLab/QCX
Length of output: 2049
Remove the shape mismatch concern; the structure is correct.

`querySuggestor` returns `PartialRelated` with top-level shape `{ items: [...] }` (per `relatedSchema` in `lib/schema/related.tsx`). The annotation emits `{ type: 'related', relatedQueries }` where `relatedQueries` is that object, and the consumer correctly accesses `related.relatedQueries.items`.

However, the bandwidth concern is valid: `relatedQueries` remains `{}` when `errorOccurred` is true (line 168), but the annotation is still emitted unconditionally (lines 199–200). The consumer guards against empty results, but streaming `{type:'related', relatedQueries:{}}` on every error wastes bandwidth. Consider only emitting the annotation when `!errorOccurred && relatedQueries.items?.length > 0`.
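The suggested guard can be sketched as a pure helper that either builds the `2:` annotation frame or returns nothing. The `buildRelatedFrame` name and the `RelatedQueries` interface are illustrative; only the frame format mirrors the route:

```typescript
// Illustrative guard: only produce the `2:` annotation frame when related
// suggestions actually exist and no upstream error occurred.
interface RelatedQueries {
  items?: { query: string }[]
}

function buildRelatedFrame(
  errorOccurred: boolean,
  relatedQueries?: RelatedQueries
): string | null {
  if (errorOccurred || !relatedQueries?.items?.length) return null
  return `2:[${JSON.stringify({ type: 'related', relatedQueries })}]\n`
}
```

The route would then enqueue the frame only when the helper returns a string, so error turns and empty suggestion sets stream nothing extra.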
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/api/chat/stream/route.ts` around lines 168 - 176, relatedQueries is left
as {} when errorOccurred and then emitted unconditionally, wasting bandwidth;
change logic so relatedQueries is only set and streamed when querySuggestor
returned non-empty results. Specifically: keep using the sanitizedMessages ->
querySuggestor(...) call but set relatedQueries to undefined/null by default,
and when emitting the annotation that currently sends { type: 'related',
relatedQueries } only do so if !errorOccurred && relatedQueries?.items?.length >
0; update the emission code that writes the { type: 'related', relatedQueries }
annotation to be conditional on that check.
```ts
async function handleResolutionSearch({
  messages, chatId, userId, fileData, mapboxImageData, googleImageData,
  timezone, drawnFeatures, location
}: {
  messages: CoreMessage[]
  chatId: string
  userId: string
  fileData: string
  mapboxImageData?: string
  googleImageData?: string
  timezone?: string
  drawnFeatures: DrawnFeature[]
  location?: { lat: number; lng: number }
}) {
  const content: CoreMessage['content'] = [
    { type: 'text', text: 'Analyze this map view.' },
    { type: 'image', image: fileData, mimeType: 'image/png' }
  ]
  messages.push({ role: 'user', content })

  try {
    const streamResult = await resolutionSearch(
      messages,
      timezone || 'UTC',
      drawnFeatures,
      location
    )

    const analysisResult = await streamResult.object

    // Get related queries
    const sanitizedMessages: CoreMessage[] = messages.map((m: any) => {
      if (Array.isArray(m.content)) {
        return { ...m, content: m.content.filter((part: any) => part.type !== 'image') } as CoreMessage
      }
      return m
    })
    const relatedQueries = await querySuggestor(sanitizedMessages)

    const encoder = new TextEncoder()
    const stream = new ReadableStream({
      start(controller) {
        // Send resolution result as annotation
        const resAnnotation = {
          type: 'resolution_search_result',
          data: {
            ...analysisResult,
            image: fileData,
            mapboxImage: mapboxImageData,
            googleImage: googleImageData
          }
        }
        controller.enqueue(encoder.encode(`2:[${JSON.stringify(resAnnotation)}]\n`))

        // Stream summary text
        if (analysisResult.summary) {
          controller.enqueue(encoder.encode(`0:${JSON.stringify(analysisResult.summary)}\n`))
        }

        // Related queries
        const relatedAnnotation = { type: 'related', relatedQueries }
        controller.enqueue(encoder.encode(`2:[${JSON.stringify(relatedAnnotation)}]\n`))

        controller.enqueue(encoder.encode(`e:{"finishReason":"stop","usage":{"promptTokens":0,"completionTokens":0}}\n`))
        controller.enqueue(encoder.encode(`d:{"finishReason":"stop","usage":{"promptTokens":0,"completionTokens":0}}\n`))
        controller.close()
      }
    })

    saveChatAsync(chatId, userId, messages, analysisResult.summary || '')

    return new Response(stream, {
      headers: streamHeaders
    })
  } catch (error) {
    console.error('Resolution search error:', error)
    return new Response(JSON.stringify({ error: 'Resolution search failed' }), {
      status: 500,
      headers: { 'Content-Type': 'application/json' }
    })
  }
}
```
`handleResolutionSearch` swallows post-streaming errors and persists with fire-and-forget.

Two issues remaining after the recent fixes:

- The outer `try { … new Response(stream) … } catch` covers only the synchronous setup. Once the `start(controller)` callback executes (lines 258-283), any throw inside it cannot reach the outer `catch` and the stream just terminates without the `e:`/`d:` finish frames, leaving `useChat` stuck in `isLoading=true`. Wrap the `start` body in try/catch and call `controller.error(err)` (or write a synthetic `e:`/`d:` with `finishReason:'error'`) on failure.
- `saveChatAsync(...)` on Line 286 is fire-and-forget on the serverless runtime — same risk flagged on the main path. Use `after()`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/api/chat/stream/route.ts` around lines 217 - 298, The stream's start
callback in handleResolutionSearch can throw without reaching the outer catch
and leaves useChat stuck; wrap the entire start(controller) body in a try/catch
and on error call controller.error(err) or enqueue synthetic finish frames (e:
and d: with finishReason:"error") before closing so the client receives a
terminal event; also stop doing fire-and-forget saveChatAsync(...) — schedule it
via after(() => saveChatAsync(chatId, userId, messages, analysisResult.summary
|| '')) (or the equivalent after wrapper used elsewhere) so the save runs safely
in the serverless background.
```ts
const aiMessages: AIMessage[] = []

for (const msg of messages) {
  let content: CoreMessage['content'] = msg.content
  if (Array.isArray(content)) {
    content = (content as any[]).filter((part: any) => part.type !== 'image') as CoreMessage['content']
  }
  aiMessages.push({
    id: (msg as any).id || nanoid(),
    role: msg.role as AIMessage['role'],
    content,
    type: msg.role === 'user' ? 'input' : undefined
  })
}

// Add tool outputs
if (toolOutputs) {
  for (const tool of toolOutputs) {
    aiMessages.push({
      id: nanoid(),
      role: 'tool',
      content: JSON.stringify(tool.result),
      name: tool.toolName,
      type: 'tool'
    })
  }
}

// Add response
if (answer) {
  aiMessages.push({
    id: nanoid(),
    role: 'assistant',
    content: answer,
    type: 'response'
  })
}

// Add related queries
if (relatedQueries) {
  aiMessages.push({
    id: nanoid(),
    role: 'assistant',
    content: JSON.stringify(relatedQueries),
    type: 'related'
  })
}

// Add end marker
aiMessages.push({
  id: nanoid(),
  role: 'assistant',
  content: 'end',
  type: 'end'
})
```
Synthetic tool/response/related/end rows get a fresh `nanoid()` every save → duplicates pile up on every turn.

The user/assistant messages now correctly preserve `msg.id` (line 328), but tool outputs (line 339), the answer (line 351), the related-queries blob (line 361), and the `'end'` marker (line 370) are all built with a brand-new `nanoid()` on every `saveChatAsync` call. Since `useChat` replays the full conversation on each request and `saveChatAsync` is invoked per turn, prior turns' synthetic rows already in the DB cannot be matched by `dbSaveChat`'s `onConflictDoUpdate(id)` clause — they get inserted again with new ids. Within a few turns the chat contains many duplicate tool/response/related/end rows, which (a) bloats the table, (b) breaks any deterministic "latest end / latest related" queries you may layer on top, and (c) means the `getChatMessages` rehydration path in `app/search/[id]/page.tsx` will replay duplicate assistant text bubbles.

Use a deterministic id derived from chatId + turn index + role/type (e.g., `${chatId}:${turnIndex}:response`) so reruns land on the same row, or only persist the delta for the current turn rather than re-writing the full history.
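The deterministic scheme can be sketched as a single pure function. The `syntheticId` name, the colon-delimited format, and the optional `toolName` disambiguator are suggestions, not the repo's existing convention:

```typescript
// Deterministic id for synthetic chat rows: the same (chat, turn, kind)
// always maps to the same id, so an upsert with onConflictDoUpdate(id)
// replaces the prior row instead of inserting a duplicate.
type SyntheticKind = 'tool' | 'response' | 'related' | 'end'

function syntheticId(
  chatId: string,
  turnIndex: number,
  kind: SyntheticKind,
  toolName?: string // disambiguates multiple tool rows within one turn
): string {
  return toolName
    ? `${chatId}:${turnIndex}:${kind}:${toolName}`
    : `${chatId}:${turnIndex}:${kind}`
}
```

Substituting `syntheticId(chatId, turnIndex, 'response')` (and the `'tool'`/`'related'`/`'end'` variants) for the `nanoid()` calls in the blocks above would make replayed saves idempotent per turn.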
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/api/chat/stream/route.ts` around lines 320 - 374, The synthetic rows
pushed into aiMessages (see the toolOutputs loop, the block adding answer,
relatedQueries, and the final 'end' marker) use fresh nanoid() each save,
causing duplicate rows; change those id generations to deterministic IDs derived
from the chat context (e.g., use chatId + turnIndex + role/type like
`${chatId}:${turnIndex}:response`) or only persist the current turn's delta
instead of re-writing all messages so onConflictDoUpdate(id) will match existing
rows; update the id assignment for the tool output block (tool.toolName), the
assistant answer block, the related-queries block, and the end marker to use
that deterministic scheme.
```ts
  messages: Message[]
  input: string
  setInput: (value: string) => void
  handleSubmit: (e?: React.FormEvent<HTMLFormElement>, options?: any) => void
```
🧹 Nitpick | 🔵 Trivial
`handleSubmit` options is typed `any`.

The rest of the interface now accurately mirrors the SDK's signatures, but `handleSubmit` regresses to `any`. Use `ChatRequestOptions` here too for consistency and to actually catch caller mistakes.
```diff
- handleSubmit: (e?: React.FormEvent<HTMLFormElement>, options?: any) => void
+ handleSubmit: (e?: React.FormEvent<HTMLFormElement>, options?: ChatRequestOptions) => void
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
- handleSubmit: (e?: React.FormEvent<HTMLFormElement>, options?: any) => void
+ handleSubmit: (e?: React.FormEvent<HTMLFormElement>, options?: ChatRequestOptions) => void
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@components/chat-provider.tsx` at line 22, Update the handleSubmit signature
in the chat provider interface to use the proper ChatRequestOptions type instead
of any: change handleSubmit: (e?: React.FormEvent<HTMLFormElement>, options?:
any) => void to accept options?: ChatRequestOptions; ensure you import or
reference ChatRequestOptions from the SDK/module where other request types come
from (matching existing imports), update any related usages/calls to satisfy the
stronger type, and run typechecks to fix any mismatches.
export function ChatProvider({ chatId, initialMessages, children }: ChatProviderProps) {
  const {
    messages,
    input,
    setInput,
    handleSubmit,
    append,
    setMessages,
    isLoading,
    data,
  } = useChat({
    api: '/api/chat/stream',
    id: chatId,
    initialMessages,
    body: { chatId },
  })

  const annotations = useMemo(() => {
    if (!data) return []
    return (data as JSONValue[]).filter(isAnnotation)
  }, [data])
No error surfacing — 401s and stream failures vanish into the console.
useChat() also returns error and accepts onError/onResponse callbacks, none of which are pulled or exposed here. Combined with the route returning a bare 401 Unauthorized JSON (see app/api/chat/stream/route.ts:22-27), this is exactly the failure mode reported on the preview deployment in the PR comments: the POST 401 happened silently and the UI showed no token output. At minimum, expose error through the context (and surface it in a toast or banner in ChatMessages/ChatPanel) so unauthenticated/failed requests don't look like "no response" to the user.
🔧 Suggested addition
- const {
+ const {
messages,
input,
setInput,
handleSubmit,
append,
setMessages,
isLoading,
data,
+ error,
+ reload,
+ stop,
} = useChat({
api: '/api/chat/stream',
id: chatId,
initialMessages,
body: { chatId },
+ onError: (err) => console.error('[chat-provider] stream error', err),
  })

…and add error/reload/stop to ChatContextValue and the memoized value.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@components/chat-provider.tsx` around lines 47 - 67, The ChatProvider
currently ignores useChat's error and control callbacks so 401s and stream
failures are swallowed; update ChatProvider to extract error, reload and stop
(and optionally onError/onResponse handlers) from useChat and include them in
the memoized context value and ChatContextValue type so consumers (e.g.,
ChatMessages/ChatPanel) can show a toast/banner on error and provide retry/stop
controls; ensure the memoized value returned by ChatProvider contains error,
reload, and stop alongside messages/input/append/handleSubmit/isLoading/data.
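Once the provider exposes the error, consumers still need something user-readable to put in the toast or banner. The helper below is a hypothetical sketch of that mapping; `describeChatError` and its message strings are illustrative and not part of the PR, but it shows how a bare 401 from the route can be turned into an actionable prompt instead of silence:

```typescript
// Hypothetical helper: turn a useChat error into banner text so 401s
// and stream failures are visible instead of disappearing into the console.
function describeChatError(err: unknown): string {
  const message = err instanceof Error ? err.message : String(err)
  // The route returns a bare 401 Unauthorized JSON; surface it as a login hint.
  if (message.includes('401') || /unauthorized/i.test(message)) {
    return 'You are signed out. Please log in to continue chatting.'
  }
  return `Chat request failed: ${message}`
}
```

A ChatMessages or ChatPanel consumer would render this string whenever the context's `error` is set.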
{searchResults ? (
  <>
    <Section size="sm" className="pt-2 pb-0">
      <ToolBadge tool="search">{`${searchResults.searchParameters.q}`}</ToolBadge>
Defensive access to searchParameters.q.
If parsing succeeds but the payload shape is partial (e.g., upstream returned a different schema), searchResults.searchParameters.q will throw inside render. Use optional chaining and a fallback to keep the section robust against malformed but well-formed-JSON payloads.
- <ToolBadge tool="search">{`${searchResults.searchParameters.q}`}</ToolBadge>
+ <ToolBadge tool="search">{`${searchResults.searchParameters?.q ?? ''}`}</ToolBadge>

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
- <ToolBadge tool="search">{`${searchResults.searchParameters.q}`}</ToolBadge>
+ <ToolBadge tool="search">{`${searchResults.searchParameters?.q ?? ''}`}</ToolBadge>
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@components/video-search-section.tsx` at line 28, The render currently
accesses searchResults.searchParameters.q directly which can throw if
searchParameters is missing; update the usage inside ToolBadge (component:
ToolBadge) to use optional chaining and a safe fallback (e.g.,
searchResults?.searchParameters?.q ?? '' or a placeholder like '—') so rendering
remains robust against partial/malformed payloads while keeping the rest of the
JSX unchanged.
  const newSegments: CoreMessage[] = [
    {
      role: 'assistant',
      content: [{ type: 'text', text: fullResponse }, ...toolCalls],
    }
  ]

  if (toolResponses.length > 0) {
-   messages.push({ role: 'tool', content: toolResponses })
+   newSegments.push({ role: 'tool', content: toolResponses })
  }

- return { result, fullResponse, hasError, toolResponses }
+ return { result, fullResponse, hasError, toolResponses, newSegments }
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Confirm the retry loop in the route handler that consumes newSegments.
rg -nP -C5 'newSegments|researcher\(' app/api/chat/stream/route.ts

Repository: QueueLab/QCX
Length of output: 861
🏁 Script executed:
#!/bin/bash
# Examine the researcher function implementation, especially around newSegments construction
cat -n lib/agents/researcher.tsx | sed -n '1,50p' # Header and imports
echo "---"
cat -n lib/agents/researcher.tsx | sed -n '100,180p' # Context around lines 134-145

Repository: QueueLab/QCX
Length of output: 4359
🏁 Script executed:
#!/bin/bash
# Look for how fullResponse and toolCalls/toolResponses are populated
rg -n "fullResponse|toolCalls|toolResponses" lib/agents/researcher.tsx | head -30

Repository: QueueLab/QCX
Length of output: 611
🏁 Script executed:
#!/bin/bash
# Check if there are any guards or validation around empty text or tool mismatches elsewhere
rg -n "type.*text|toolCallId" lib/agents/researcher.tsx | head -20

Repository: QueueLab/QCX
Length of output: 129
🏁 Script executed:
#!/bin/bash
# Check ToolCallPart and ToolResultPart structure - they should have toolCallId
rg -n "toolCallId|ToolCallPart|ToolResultPart" lib/agents/researcher.tsx

Repository: QueueLab/QCX
Length of output: 186
🏁 Script executed:
#!/bin/bash
# Check if tools are actually defined with IDs and look at the type definitions
fd -t f "tools" lib/agents/ | head -5

Repository: QueueLab/QCX
Length of output: 38
🏁 Script executed:
#!/bin/bash
# Trace a complete scenario: look for examples of how tool-result matches tool-call in the ai package or codebase
rg -n "toolCallId" . --type ts --type tsx | head -10

Repository: QueueLab/QCX
Length of output: 84
🏁 Script executed:
#!/bin/bash
# Check if there's package.json or any ai package reference
cat -n package.json | grep -A2 -B2 '"ai"'

Repository: QueueLab/QCX
Length of output: 260
🏁 Script executed:
#!/bin/bash
# Look for any tool execution/result handling to understand the matching mechanism
rg -n "toolResponses" lib/agents/researcher.tsx -A5 -B5

Repository: QueueLab/QCX
Length of output: 1016
🏁 Script executed:
#!/bin/bash
# Check how tool results are handled in the route that uses this
rg -n "toolOutputs|toolResponses|allToolOutputs" app/api/chat/stream/route.ts -B3 -A3

Repository: QueueLab/QCX
Length of output: 1832
Unmatched tool calls can corrupt the message history on retry attempts.
When an 'error' event occurs in the stream after a 'tool-call' but before its corresponding 'tool-result', the newSegments will contain tool calls without matching tool results. Because the route handler appends newSegments to the message history whenever errorOccurred is true (line 157–158), the next retry attempt will inherit an invalid message sequence with dangling tool calls.
Additionally, if the stream ends without any text deltas or error messaging, fullResponse remains empty, resulting in an assistant message with an empty text part.
The proposed guard approach—filtering tool calls to only include those with matching results—is correct. However, verify that ToolCallPart and ToolResultPart objects from the ai package v4.3.19 use the toolCallId property for matching; the codebase shows only toolName in actual usage.
🛡️ Proposed guard (verify property names)
- const newSegments: CoreMessage[] = [
- {
- role: 'assistant',
- content: [{ type: 'text', text: fullResponse }, ...toolCalls],
- }
- ]
-
- if (toolResponses.length > 0) {
- newSegments.push({ role: 'tool', content: toolResponses })
- }
+ const assistantContent: any[] = []
+ if (fullResponse) {
+ assistantContent.push({ type: 'text', text: fullResponse })
+ }
+ // Only include tool_calls that have a matching tool_result, to keep the
+ // conversation valid for retry attempts in the route handler.
+ const matchedToolCalls = toolCalls.filter(tc =>
+ toolResponses.some(tr => tr.toolCallId === tc.toolCallId)
+ )
+ assistantContent.push(...matchedToolCalls)
+
+ const newSegments: CoreMessage[] = []
+ if (assistantContent.length > 0) {
+ newSegments.push({ role: 'assistant', content: assistantContent })
+ }
+ if (toolResponses.length > 0) {
+ newSegments.push({ role: 'tool', content: toolResponses })
+ }🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@lib/agents/researcher.tsx` around lines 134 - 145, newSegments can include
tool-calls without matching tool-results when a stream errors after a
'tool-call' but before its 'tool-result', and also can create an assistant
message with empty text when fullResponse is empty; fix by filtering toolCalls
to only include those that have a corresponding toolResult (match by the
ToolCallPart/ToolResultPart identifier — verify and use toolCallId if present,
otherwise toolName) before building newSegments, and skip adding an assistant
text part when fullResponse is empty; update the logic around the variables
newSegments, toolCalls, toolResponses, ToolCallPart, ToolResultPart,
fullResponse, and the errorOccurred handling so only paired
tool-call/tool-result entries are appended to message history.
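The pairing guard at the heart of the proposed fix can be isolated as a pure function. This sketch assumes tool parts carry a `toolCallId` (as `ToolCallPart`/`ToolResultPart` do in the ai SDK); as the review notes, verify that property name against the installed version before adopting it, and fall back to `toolName` matching if necessary:

```typescript
// Sketch of the pairing guard: keep only tool calls that have a matching
// tool result, so dangling calls never enter the retry message history.
type ToolCallLike = { type: 'tool-call'; toolCallId: string; toolName: string }
type ToolResultLike = { type: 'tool-result'; toolCallId: string; result: unknown }

function matchedToolCalls(
  calls: ToolCallLike[],
  results: ToolResultLike[]
): ToolCallLike[] {
  const resolved = new Set(results.map(r => r.toolCallId))
  return calls.filter(c => resolved.has(c.toolCallId))
}
```

In the researcher, the filtered array replaces `...toolCalls` when building the assistant content, and the assistant segment is skipped entirely when both the text and the filtered calls are empty.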
  const result = await nonexperimental_streamText({
    model: (await getModel()) as LanguageModel,
    maxTokens: 2500,
-   system: systemToUse, // Use the dynamic or default system prompt
+   system: systemToUse,
    messages
  })

  for await (const text of result.textStream) {
    if (text) {
      fullResponse += text
-     streamText.update(fullResponse)
    }
  }

- streamText.done()

  return fullResponse
🧹 Nitpick | 🔵 Trivial
🧩 Analysis chain
🌐 Web query:
In Vercel AI SDK v4, does generateText support the same model/system/messages/maxTokens parameters as streamText, and is it the recommended primitive for final-string-only consumers?
💡 Result:
Yes, in Vercel AI SDK v4, generateText supports the same model, system, messages, and maxTokens (as maxOutputTokens or maxTokens) parameters as streamText. It is the recommended primitive for final-string-only consumers, such as non-interactive use cases like drafting emails, summarizing web pages, or agents using tools, where you await the full generated text rather than streaming it.
Citations:
- 1: https://ai-sdk.dev/docs/reference/ai-sdk-core/generate-text
- 2: https://ai-sdk.dev/docs/ai-sdk-core/generating-text
- 3: https://v4.ai-sdk.dev/docs/ai-sdk-core/generating-text
- 4: https://v4.ai-sdk.dev/docs/reference/ai-sdk-core/generate-text
- 5: https://v4.ai-sdk.dev/docs/reference/ai-sdk-core/stream-text
- 6: https://sdk.vercel.ai/docs/reference/ai-sdk-core/stream-text
- 7: https://sdk.vercel.ai/docs/reference/ai-sdk-core/generate-text
- 8: https://v4.ai-sdk.dev/docs/ai-sdk-core/prompts
Prefer generateText now that no caller consumes deltas.
With UI streaming removed, the for-await loop only exists to reassemble the final string. generateText returns the full text directly and is the idiomatic primitive for final-value-only consumers (parallel to the streamObject → generateObject swaps elsewhere in this PR).
♻️ Proposed refactor
-import { CoreMessage, LanguageModel, streamText as nonexperimental_streamText } from 'ai'
+import { CoreMessage, LanguageModel, generateText } from 'ai'
import { getModel } from '../utils'
@@
-export async function writer(
+export async function writer(
dynamicSystemPrompt: string,
messages: CoreMessage[]
) {
- let fullResponse = ''
-
const default_system_prompt = `...`
@@
const systemToUse = dynamicSystemPrompt && dynamicSystemPrompt.trim() !== '' ? dynamicSystemPrompt : default_system_prompt;
- const result = await nonexperimental_streamText({
+ const { text } = await generateText({
model: (await getModel()) as LanguageModel,
maxTokens: 2500,
system: systemToUse,
messages
})
- for await (const text of result.textStream) {
- if (text) {
- fullResponse += text
- }
- }
-
- return fullResponse
+ return text
}🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@lib/agents/writer.tsx` around lines 22 - 35, Replace the
nonexperimental_streamText usage and manual for-await reassembly with
generateText: call generateText({ model: await getModel() as LanguageModel,
maxTokens: 2500, system: systemToUse, messages }) await its result and return
the returned full text directly instead of looping over result.textStream and
concatenating into fullResponse; remove the fullResponse variable and the
for-await loop around nonexperimental_streamText.
Summary

- Replaces the experimental ai/rsc pattern (server-pushed React components via createStreamableUI) with the stable ai/ui pattern (useChat hook + route handler + client-side rendering)
- New /api/chat/stream route handler using streamText + data stream protocol, replacing the submit server action from createAI
- Removes all ai/rsc APIs — zero remaining ai/rsc imports after this PR

Key Changes

Server (6 files):

- app/api/chat/stream/route.ts — New streaming endpoint with task manager -> researcher -> writer -> query suggestor orchestration
- lib/agents/tools/* — Tools return JSON data only (removed uiStream dependency)
- lib/agents/* — Agents return data/stream results only (removed UI rendering)

Client (15 files):

- components/chat-provider.tsx — New ChatProvider wrapping useChat for shared state
- components/tool-result-renderer.tsx — Client-side tool result -> UI component mapping
- Migrated from useUIState/useAIState/useActions/useStreamableValue to useChatContext
- BotMessage accepts string instead of StreamableValue<string>
- CollapsibleMessage accepts boolean instead of StreamableValue<boolean>

Pages (2 files):

- app/page.tsx, app/search/[id]/page.tsx — <AI> provider replaced with <ChatProvider>

Removed:

- createAI, createStreamableUI, createStreamableValue, getMutableAIState, getAIState
- useUIState, useAIState, useActions, useStreamableValue, readStreamableValue
- getUIStateFromAIState reconstruction function
- onSetAIState/onGetUIState callbacks

Stats: 29 files changed, 830 insertions, 1525 deletions
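On the client, consuming the data stream protocol boils down to splitting frames of the form `<type>:<json>` per line. The parser below is an illustration of reading the text (`0:`) frames mentioned above, not the SDK's own implementation; useChat handles this internally, and other frame-type codes (annotations, tool results) are simply skipped here:

```typescript
// Hedged sketch: concatenate the text ("0:") frames of an AI SDK data
// stream body, ignoring other frame types such as annotations.
function concatTextFrames(stream: string): string {
  return stream
    .split('\n')
    .filter(line => line.startsWith('0:'))
    .map(line => JSON.parse(line.slice(2)) as string)
    .join('')
}
```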
Test plan

- npx next build passes (confirmed in this PR)
- grep -rn "ai/rsc" returns zero results

Closes #59
Generated with Claude Code
Summary by CodeRabbit
Refactor
New Features